
tensorbuilder.api.applicative module

from classes import Applicative

__all__ = ["Applicative"]

Classes

class Applicative

docstring for Applicative

class Applicative(ApplicativeBase):
    """docstring for Applicative"""
    def __init__(self, f):
        super(Applicative, self).__init__(f)
    def Builder(self, tensor):
        return Builder(tensor)

Ancestors (in MRO)

  • Applicative
  • tensorbuilder.core.applicative.ApplicativeBase
  • __builtin__.object
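Every method documented below is generated by the same mechanism: it looks up the matching Builder method by name and composes it onto the function wrapped by the Applicative. A condensed sketch of that pattern, reconstructed from the _method source repeated throughout this page (the factory name _make_applicative_method is illustrative, not part of the library):

```python
# Sketch of the generation pattern behind every method below.
# `f` is the Builder method being lifted; `app` is an Applicative instance.
def _make_applicative_method(f):
    def _method(app, *args, **kwargs):
        def _lambda(builder):
            g = getattr(builder, f.__name__)  # look up the Builder method by name
            return g(*args, **kwargs)         # forward all arguments to it
        return app.compose(_lambda)           # compose onto the wrapped function
    return _method
```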

Methods

def __init__(self, f)

def __init__(self, f):
    super(Applicative, self).__init__(f)

def Assert(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.Assert, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.Assert

Return

Applicative

Original documentation for Builder.Assert

def Assert(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.Assert to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.Assert

def Assert(condition, data, summarize=None, name=None)

Asserts that the given condition is true.

If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.

NOTE: To ensure that Assert executes, one usually attaches a dependency:

```python
# Ensure maximum element of x is smaller or equal to 1
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
x = tf.with_dependencies([assert_op], x)
```

Args:
  • condition: The condition to evaluate.
  • data: The tensors to print out when condition is false.
  • summarize: Print this many entries of each tensor.
  • name: A name for this operation (optional).

Returns: assert_op: An Operation that, when executed, raises a tf.errors.InvalidArgumentError if condition is not true.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
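As a usage sketch (the identity starting point and the tensor x are illustrative, not taken from this page), each generated method returns a new Applicative, so calls chain like ordinary builder operations:

```python
import tensorflow as tf
from tensorbuilder.api.applicative import Applicative

x = tf.placeholder(tf.float32, shape=[None])  # illustrative input tensor
app = Applicative(lambda builder: builder)    # illustrative identity applicative

# Equivalent to app.compose(Builder.Assert, ...): the condition argument of
# tf.Assert is supplied later by whatever builder the pipeline is applied to.
checked = app.Assert([x], summarize=3)
```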

def Assert_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.Assert_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.Assert_layer

Return

Applicative

Original documentation for Builder.Assert_layer

def Assert_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.Assert, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.Assert

def Assert(condition, data, summarize=None, name=None):

Asserts that the given condition is true.

If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.

NOTE: To ensure that Assert executes, one usually attaches a dependency:

```python
# Ensure maximum element of x is smaller or equal to 1
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
x = tf.with_dependencies([assert_op], x)
```

Args:
  • condition: The condition to evaluate.
  • data: The tensors to print out when condition is false.
  • summarize: Print this many entries of each tensor.
  • name: A name for this operation (optional).

Returns: assert_op: An Operation that, when executed, raises a tf.errors.InvalidArgumentError if condition is not true.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def Builder(self, tensor)

def Builder(self, tensor):
    return Builder(tensor)
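For example (a minimal sketch; the placeholder shape is arbitrary), Builder wraps a plain tensor so the lifted operations documented on this page can be applied to it:

```python
import tensorflow as tf
from tensorbuilder.api.applicative import Applicative

app = Applicative(lambda builder: builder)       # illustrative identity applicative
x = tf.placeholder(tf.float32, shape=[None, 4])  # arbitrary example tensor
b = app.Builder(x)                               # wrap the tensor in a Builder
```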

def BuilderTree(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.BuilderTree, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.BuilderTree

Return

Applicative

Original documentation for Builder.BuilderTree

def BuilderTree(self, builder_iterable):

None

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def NoGradient_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.NoGradient_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.NoGradient_layer

Return

Applicative

Original documentation for Builder.NoGradient_layer

def NoGradient_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.NoGradient, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.NoGradient

def NotDifferentiable(op_type):

Specifies that ops of type op_type are not differentiable.

This function should not be used for operations that have a well-defined gradient that is not yet implemented.

This function is only used when defining a new op type. It may be used for ops such as tf.size() that are not differentiable. For example:

```python
tf.NotDifferentiable("Size")
```

The gradient computed for 'op_type' will then propagate zeros.

For ops that have a well-defined gradient but are not yet implemented, no declaration should be made, and an error must be thrown if an attempt to request its gradient is made.

Args:
  • op_type: The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.

Raises: TypeError: If op_type is not a string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def NotDifferentiable(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.NotDifferentiable, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.NotDifferentiable

Return

Applicative

Original documentation for Builder.NotDifferentiable

def NotDifferentiable(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.NotDifferentiable to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.NotDifferentiable

def NotDifferentiable(op_type)

Specifies that ops of type op_type are not differentiable.

This function should not be used for operations that have a well-defined gradient that is not yet implemented.

This function is only used when defining a new op type. It may be used for ops such as tf.size() that are not differentiable. For example:

```python
tf.NotDifferentiable("Size")
```

The gradient computed for 'op_type' will then propagate zeros.

For ops that have a well-defined gradient but are not yet implemented, no declaration should be made, and an error must be thrown if an attempt to request its gradient is made.

Args:
  • op_type: The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.

Raises: TypeError: If op_type is not a string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def NotDifferentiable_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.NotDifferentiable_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.NotDifferentiable_layer

Return

Applicative

Original documentation for Builder.NotDifferentiable_layer

def NotDifferentiable_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.NotDifferentiable, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.NotDifferentiable

def NotDifferentiable(op_type):

Specifies that ops of type op_type are not differentiable.

This function should not be used for operations that have a well-defined gradient that is not yet implemented.

This function is only used when defining a new op type. It may be used for ops such as tf.size() that are not differentiable. For example:

```python
tf.NotDifferentiable("Size")
```

The gradient computed for 'op_type' will then propagate zeros.

For ops that have a well-defined gradient but are not yet implemented, no declaration should be made, and an error must be thrown if an attempt to request its gradient is made.

Args:
  • op_type: The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.

Raises: TypeError: If op_type is not a string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def Print(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.Print, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.Print

Return

Applicative

Original documentation for Builder.Print

def Print(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.Print to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.Print

def Print(input_, data, message=None, first_n=None, summarize=None, name=None)

Prints a list of tensors.

This is an identity op with the side effect of printing data when evaluating.

Args:
  • input_: A tensor passed through this op.
  • data: A list of tensors to print out when op is evaluated.
  • message: A string, prefix of the error message.
  • first_n: Only log first_n number of times. Negative numbers log always; this is the default.
  • summarize: Only print this many entries of each tensor. If None, then a maximum of 3 elements are printed per input tensor.
  • name: A name for the operation (optional).

Returns: Same tensor as input_.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
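A minimal standalone example of the underlying op, using the session-era TensorFlow API this page documents (independent of the builder machinery):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
# Identity op: returns x unchanged, printing the data list when evaluated.
x = tf.Print(x, [x], message="x is: ", summarize=3)

with tf.Session() as sess:
    sess.run(x)  # logs something like "x is: [1 2 3]" to stderr
```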

def Print_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.Print_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.Print_layer

Return

Applicative

Original documentation for Builder.Print_layer

def Print_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.Print, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.Print

def Print(input_, data, message=None, first_n=None, summarize=None, name=None):

Prints a list of tensors.

This is an identity op with the side effect of printing data when evaluating.

Args:
  • input_: A tensor passed through this op.
  • data: A list of tensors to print out when op is evaluated.
  • message: A string, prefix of the error message.
  • first_n: Only log first_n number of times. Negative numbers log always; this is the default.
  • summarize: Only print this many entries of each tensor. If None, then a maximum of 3 elements are printed per input tensor.
  • name: A name for the operation (optional).

Returns: Same tensor as input_.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def abs(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.abs, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.abs

Return

Applicative

Original documentation for Builder.abs

def abs(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.abs to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.abs

def abs(x, name=None)

Computes the absolute value of a tensor.

Given a tensor of real numbers x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).

See tf.complex_abs() to compute the absolute value of a complex number.

Args:
  • x: A Tensor or SparseTensor of type float32, float64, int32, or int64.
  • name: A name for the operation (optional).

Returns: A Tensor or SparseTensor the same size and type as x with absolute values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
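A quick standalone example of the underlying op:

```python
import tensorflow as tf

x = tf.constant([-1.5, 0.0, 2.5])
y = tf.abs(x)  # element-wise absolute value

with tf.Session() as sess:
    print(sess.run(y))  # ==> [1.5  0.  2.5]
```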

def abs_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.abs_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.abs_layer

Return

Applicative

Original documentation for Builder.abs_layer

def abs_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.abs, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.abs

def abs(x, name=None):

Computes the absolute value of a tensor.

Given a tensor of real numbers x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).

See tf.complex_abs() to compute the absolute value of a complex number.

Args:
  • x: A Tensor or SparseTensor of type float32, float64, int32, or int64.
  • name: A name for the operation (optional).

Returns: A Tensor or SparseTensor the same size and type as x with absolute values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
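To make the _layer aliases concrete, a hypothetical sketch (assuming, as the alias above states, that abs_layer forwards to the lifted fully_connected; the identity applicative is illustrative):

```python
import tensorflow as tf
from tensorbuilder.api.applicative import Applicative

app = Applicative(lambda builder: builder)  # illustrative identity applicative

# Per the alias above, these two pipeline steps build the same layer:
a = app.abs_layer(10)
b = app.fully_connected(10, activation_fn=tf.abs)
# i.e. a 10-unit fully connected layer whose activation function is tf.abs.
```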

def accumulate_n(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.accumulate_n, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.accumulate_n

Return

Applicative

Original documentation for Builder.accumulate_n

def accumulate_n(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.accumulate_n to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.accumulate_n

def accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)

Returns the element-wise sum of a list of tensors.

Optionally, pass shape and tensor_dtype for shape and type checking, otherwise, these are inferred.

NOTE: This operation is not differentiable and cannot be used if inputs depend on trainable variables. Please use tf.add_n for such cases.

For example:

```python
# tensor 'a' is [[1, 2], [3, 4]]
# tensor 'b' is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) ==> [[7, 4], [6, 14]]
```

Args:
  • inputs: A list of Tensor objects, each with same shape and type.
  • shape: Shape of elements of inputs.
  • tensor_dtype: The type of inputs.
  • name: A name for the operation (optional).

Returns: A Tensor of same shape and type as the elements of inputs.

Raises: ValueError: If inputs don't all have same shape and dtype or the shape cannot be inferred.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def accumulate_n_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.accumulate_n_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.accumulate_n_layer

Return

Applicative

Original documentation for Builder.accumulate_n_layer

def accumulate_n_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.accumulate_n, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.accumulate_n

def accumulate_n(inputs, shape=None, tensor_dtype=None, name=None):

Returns the element-wise sum of a list of tensors.

Optionally, pass shape and tensor_dtype for shape and type checking, otherwise, these are inferred.

NOTE: This operation is not differentiable and cannot be used if inputs depend on trainable variables. Please use tf.add_n for such cases.

For example:

```python
# tensor 'a' is [[1, 2], [3, 4]]
# tensor 'b' is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) ==> [[7, 4], [6, 14]]
```

Args:
  • inputs: A list of Tensor objects, each with same shape and type.
  • shape: Shape of elements of inputs.
  • tensor_dtype: The type of inputs.
  • name: A name for the operation (optional).

Returns: A Tensor of same shape and type as the elements of inputs.

Raises: ValueError: If inputs don't all have same shape and dtype or the shape cannot be inferred.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def acos(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.acos, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.acos

Return

Applicative

Original documentation for Builder.acos

def acos(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.acos to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.acos

def acos(x, name=None)

Computes acos of x element-wise.

Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def acos_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.acos_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.acos_layer

Return

Applicative

Original documentation for Builder.acos_layer

def acos_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.acos, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.acos

def acos(x, name=None):

Computes acos of x element-wise.

Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def add(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add

Return

Applicative

Original documentation for Builder.add

def add(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.add to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.add

def add(x, y, name=None)

Returns x + y element-wise.

NOTE: Add supports broadcasting. AddN does not.

Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
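A standalone example showing the broadcasting behavior noted above:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([10.0, 20.0])  # broadcast against each row of a
c = tf.add(a, b)

with tf.Session() as sess:
    print(sess.run(c))  # ==> [[11. 22.] [13. 24.]]
```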

def add_check_numerics_ops(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_check_numerics_ops, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_check_numerics_ops

Return

Applicative

Original documentation for Builder.add_check_numerics_ops

def add_check_numerics_ops(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.add_check_numerics_ops to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.add_check_numerics_ops

def add_check_numerics_ops()

Connect a check_numerics to every floating point tensor.

check_numerics operations themselves are added for each half, float, or double tensor in the graph. For all ops in the graph, the check_numerics op for all of its (half, float, or double) inputs is guaranteed to run before the check_numerics op on any of its outputs.

Returns: A group op depending on all check_numerics ops added.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def add_check_numerics_ops_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_check_numerics_ops_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_check_numerics_ops_layer

Return

Applicative

Original documentation for Builder.add_check_numerics_ops_layer

def add_check_numerics_ops_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.add_check_numerics_ops, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.add_check_numerics_ops

def add_check_numerics_ops():

Connect a check_numerics to every floating point tensor.

check_numerics operations themselves are added for each half, float, or double tensor in the graph. For all ops in the graph, the check_numerics op for all of its (half, float, or double) inputs is guaranteed to run before the check_numerics op on any of its outputs.

Returns: A group op depending on all check_numerics ops added.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def add_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_layer

Return

Applicative

Original documentation for Builder.add_layer

def add_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.add, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.add

def add(x, y, name=None):

Returns x + y element-wise.

NOTE: Add supports broadcasting. AddN does not.

Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def add_n(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_n, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_n

Return

Applicative

Original documentation for Builder.add_n

def add_n(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.add_n to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.add_n

def add_n(inputs, name=None)

Adds all input tensors element-wise.

Args:
  • inputs: A list of Tensor objects, each with same shape and type.
  • name: A name for the operation (optional).

Returns: A Tensor of same shape and type as the elements of inputs.

Raises: ValueError: If inputs don't all have same shape and dtype or the shape cannot be inferred.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
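A standalone example of the underlying op (contrast with tf.accumulate_n above: tf.add_n is differentiable):

```python
import tensorflow as tf

a = tf.constant([1, 2])
b = tf.constant([3, 4])
s = tf.add_n([a, b, a])  # element-wise sum over the whole list

with tf.Session() as sess:
    print(sess.run(s))  # ==> [5 8]
```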

def add_n_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_n_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_n_layer

Return

Applicative

Original documentation for Builder.add_n_layer

def add_n_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.add_n, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.add_n

def add_n(inputs, name=None):

Adds all input tensors element-wise.

Args:
  • inputs: A list of Tensor objects, each with same shape and type.
  • name: A name for the operation (optional).

Returns: A Tensor of same shape and type as the elements of inputs.

Raises: ValueError: If inputs don't all have same shape and dtype or the shape cannot be inferred.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def add_regularization_loss(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_regularization_loss, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_regularization_loss

Return

Applicative

Original documentation for Builder.add_regularization_loss

def add_regularization_loss(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tensorbuilder.add_regularization_loss to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tensorbuilder.add_regularization_loss

def add_regularization_loss(tensor, graph=None, scope="add_regularization_loss")

None

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def add_to_collection(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_to_collection, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_to_collection

Return

Applicative

Original documentation for Builder.add_to_collection

def add_to_collection(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.add_to_collection to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.add_to_collection

def add_to_collection(name, value)

Wrapper for Graph.add_to_collection() using the default graph.

See Graph.add_to_collection() for more details.

Args:
  • name: The key for the collection. For example, the GraphKeys class contains many standard names for collections.
  • value: The value to add to the collection.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
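A standalone example of the underlying collection mechanism (tf.get_collection is the standard companion call for reading a collection back):

```python
import tensorflow as tf

loss = tf.constant(0.5, name="my_loss")
tf.add_to_collection("losses", loss)  # store the tensor under a string key
print(tf.get_collection("losses"))    # ==> [<tf.Tensor 'my_loss:0' ...>]
```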

def add_to_collection_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.add_to_collection_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.add_to_collection_layer

Return

Applicative

Original documentation for Builder.add_to_collection_layer

def add_to_collection_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.add_to_collection, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.add_to_collection

def add_to_collection(name, value):

Wrapper for Graph.add_to_collection() using the default graph.

See Graph.add_to_collection() for more details.

Args:
  • name: The key for the collection. For example, the GraphKeys class contains many standard names for collections.
  • value: The value to add to the collection.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def all_candidate_sampler(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.all_candidate_sampler, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.all_candidate_sampler

Return

Applicative

Original documentation for Builder.all_candidate_sampler

def all_candidate_sampler(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.all_candidate_sampler to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.all_candidate_sampler

def all_candidate_sampler(true_classes, num_true, num_sampled, unique, seed=None, name=None)

Generate the set of all classes.

Deterministically generates and returns the set of all possible classes. For testing purposes. There is no need to use this, since you might as well use full softmax or full logistic regression.

Args:
  • true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
  • num_true: An int. The number of target classes per training example.
  • num_sampled: An int. The number of possible classes.
  • unique: A bool. Ignored.
  • seed: An int. An operation-specific seed. Default is 0.
  • name: A name for the operation (optional).

Returns:
  • sampled_candidates: A tensor of type int64 and shape [num_sampled]. This operation deterministically returns the entire range [0, num_sampled].
  • true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. All returned values are 1.0.
  • sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates. All returned values are 1.0.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def all_candidate_sampler_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.all_candidate_sampler_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.all_candidate_sampler_layer

Return

Applicative

Original documentation for Builder.all_candidate_sampler_layer

def all_candidate_sampler_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.all_candidate_sampler, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.all_candidate_sampler

def all_candidate_sampler(true_classes, num_true, num_sampled, unique, seed=None, name=None):

Generate the set of all classes.

Deterministically generates and returns the set of all possible classes. For testing purposes. There is no need to use this, since you might as well use full softmax or full logistic regression.

Args:
  • true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
  • num_true: An int. The number of target classes per training example.
  • num_sampled: An int. The number of possible classes.
  • unique: A bool. Ignored.
  • seed: An int. An operation-specific seed. Default is 0.
  • name: A name for the operation (optional).

Returns:
  • sampled_candidates: A tensor of type int64 and shape [num_sampled]. This operation deterministically returns the entire range [0, num_sampled].
  • true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. All returned values are 1.0.
  • sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates. All returned values are 1.0.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def all_variables(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.all_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.all_variables

Return

Applicative

Original documentation for Builder.all_variables

def all_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.all_variables to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.all_variables

def all_variables()

Returns all variables that must be saved/restored.

The Variable() constructor automatically adds new variables to the graph collection GraphKeys.VARIABLES. This convenience function returns the contents of that collection.

Returns: A list of Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
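A standalone example of the underlying function:

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([2]), name="v")
w = tf.Variable(tf.ones([3]), name="w")
# Both variables were auto-added to the GraphKeys.VARIABLES collection:
print([var.name for var in tf.all_variables()])  # e.g. ['v:0', 'w:0']
```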

def all_variables_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.all_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.all_variables_layer

Return

Applicative

Original documentation for Builder.all_variables_layer

def all_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.all_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.all_variables

def all_variables():

Returns all variables that must be saved/restored.

The Variable() constructor automatically adds new variables to the graph collection GraphKeys.VARIABLES. This convenience function returns the contents of that collection.

Returns: A list of Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def arg_max(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.arg_max, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.arg_max

Return

Applicative

Original documentation for Builder.arg_max

def arg_max(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.arg_max to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.arg_max

def arg_max(input, dimension, name=None)

Returns the index with the largest value across dimensions of a tensor.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • dimension: A Tensor. Must be one of the following types: int32, int64. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
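A standalone example of the underlying op:

```python
import tensorflow as tf

x = tf.constant([[1, 9, 3],
                 [7, 2, 8]])
idx = tf.arg_max(x, dimension=1)  # index of the largest entry in each row

with tf.Session() as sess:
    print(sess.run(idx))  # ==> [1 2]
```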

def arg_max_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.arg_max_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.arg_max_layer

Return

Applicative

Original documentation for Builder.arg_max_layer

def arg_max_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.arg_max, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.arg_max

def arg_max(input, dimension, name=None):

Returns the index with the largest value across dimensions of a tensor.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • dimension: A Tensor. Must be one of the following types: int32, int64. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def arg_min(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.arg_min, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.arg_min

Return

Applicative

Original documentation for Builder.arg_min

def arg_min(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.arg_min to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.arg_min

def arg_min(input, dimension, name=None)

Returns the index with the smallest value across dimensions of a tensor.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • dimension: A Tensor. Must be one of the following types: int32, int64. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def arg_min_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.arg_min_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.arg_min_layer

Return

Applicative

Original documentation for Builder.arg_min_layer

def arg_min_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.arg_min, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.arg_min

def arg_min(input, dimension, name=None):

Returns the index with the smallest value across dimensions of a tensor.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • dimension: A Tensor. Must be one of the following types: int32, int64. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def argmax_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.argmax_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.argmax_layer

Return

Applicative

Original documentation for Builder.argmax_layer

def argmax_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.argmax, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.argmax

def arg_max(input, dimension, name=None):

Returns the index with the largest value across dimensions of a tensor.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • dimension: A Tensor. Must be one of the following types: int32, int64. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def argmin_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.argmin_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.argmin_layer

Return

Applicative

Original documentation for Builder.argmin_layer

def argmin_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.argmin, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.argmin

def arg_min(input, dimension, name=None):

Returns the index with the smallest value across dimensions of a tensor.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • dimension: A Tensor. Must be one of the following types: int32, int64. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def as_dtype(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.as_dtype, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.as_dtype

Return

Applicative

Original documentation for Builder.as_dtype

def as_dtype(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.as_dtype to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.as_dtype

def as_dtype(type_value)

Converts the given type_value to a DType.

Args:
  • type_value: A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a DataType enum, a string type name, or a numpy.dtype.

Returns: A DType corresponding to type_value.

Raises: TypeError: If type_value cannot be converted to a DType.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
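A standalone example of the conversions described above:

```python
import numpy as np
import tensorflow as tf

print(tf.as_dtype("float32") == tf.float32)  # True: from a string type name
print(tf.as_dtype(np.int64) == tf.int64)     # True: from a numpy dtype
print(tf.as_dtype(tf.bool) == tf.bool)       # True: DType objects pass through
```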

def as_dtype_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.as_dtype_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.as_dtype_layer

Return

Applicative

Original documentation for Builder.as_dtype_layer

def as_dtype_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.as_dtype, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.as_dtype

def as_dtype(type_value):

Converts the given type_value to a DType.

Args:
  • type_value: A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a DataType enum, a string type name, or a numpy.dtype.

Returns: A DType corresponding to type_value.

Raises: TypeError: If type_value cannot be converted to a DType.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def as_string(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.as_string, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.as_string

Return

Applicative

Original documentation for Builder.as_string

def as_string(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.as_string to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.as_string

def as_string(input, precision=None, scientific=None, shortest=None, width=None, fill=None, name=None)

Converts each entry in the given tensor to strings. Supports many numeric types and boolean.

Args:
  • input: A Tensor. Must be one of the following types: int32, int64, complex64, float32, float64, bool, int8.
  • precision: An optional int. Defaults to -1. The post-decimal precision to use for floating point numbers. Only used if precision > -1.
  • scientific: An optional bool. Defaults to False. Use scientific notation for floating point numbers.
  • shortest: An optional bool. Defaults to False. Use shortest representation (either scientific or standard) for floating point numbers.
  • width: An optional int. Defaults to -1. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
  • fill: An optional string. Defaults to "". The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character.
  • name: A name for the operation (optional).

Returns: A Tensor of type string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
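A standalone example of the underlying op:

```python
import tensorflow as tf

x = tf.constant([3.14159, 2.71828])
s = tf.as_string(x, precision=2)  # fixed-point strings with 2 decimals

with tf.Session() as sess:
    print(sess.run(s))  # e.g. ['3.14' '2.72']
```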

def as_string_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.as_string_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.as_string_layer

Return

Applicative

Original documentation for Builder.as_string_layer

def as_string_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.as_string, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.as_string

def as_string(input, precision=None, scientific=None, shortest=None, width=None, fill=None, name=None):

Converts each entry in the given tensor to strings. Supports many numeric types and boolean.

Args:
  • input: A Tensor. Must be one of the following types: int32, int64, complex64, float32, float64, bool, int8.
  • precision: An optional int. Defaults to -1. The post-decimal precision to use for floating point numbers. Only used if precision > -1.
  • scientific: An optional bool. Defaults to False. Use scientific notation for floating point numbers.
  • shortest: An optional bool. Defaults to False. Use shortest representation (either scientific or standard) for floating point numbers.
  • width: An optional int. Defaults to -1. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
  • fill: An optional string. Defaults to "". The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character.
  • name: A name for the operation (optional).

Returns: A Tensor of type string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def asin(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.asin, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.asin

Return

Applicative

Original documentation for Builder.asin

def asin(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.asin to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.asin

def asin(x, name=None)

Computes asin of x element-wise.

Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def asin_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.asin_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.asin_layer

Return

Applicative

Original documentation for Builder.asin_layer

def asin_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.asin, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.asin

def asin(x, name=None):

Computes asin of x element-wise.

Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_equal(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_equal

Return

Applicative

Original documentation for Builder.assert_equal

def assert_equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_equal to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_equal

def assert_equal(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x == y holds element-wise.

Example of adding a dependency to an operation:

```python
with tf.control_dependencies([tf.assert_equal(x, y)]):
    output = tf.reduce_sum(x)
```

Example of adding dependency to the tensor being checked:

```python
x = tf.with_dependencies([tf.assert_equal(x, y)], x)
```

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] == y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  • x: Numeric Tensor.
  • y: Numeric Tensor, same dtype as and broadcastable to x.
  • data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  • summarize: Print this many entries of each tensor.
  • message: A string to prefix to the default message.
  • name: A name for this operation (optional). Defaults to "assert_equal".

Returns: Op that raises InvalidArgumentError if x == y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_equal_layer(app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_equal_layer

Return

Applicative

Original documentation for Builder.assert_equal_layer

def assert_equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_equal

def assert_equal(x, y, data=None, summarize=None, message=None, name=None):

Assert the condition x == y holds element-wise.

Example of adding a dependency to an operation:

```python
with tf.control_dependencies([tf.assert_equal(x, y)]):
    output = tf.reduce_sum(x)
```

Example of adding dependency to the tensor being checked:

```python
x = tf.with_dependencies([tf.assert_equal(x, y)], x)
```

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] == y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  • x: Numeric Tensor.
  • y: Numeric Tensor, same dtype as and broadcastable to x.
  • data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  • summarize: Print this many entries of each tensor.
  • message: A string to prefix to the default message.
  • name: A name for this operation (optional). Defaults to "assert_equal".

Returns: Op that raises InvalidArgumentError if x == y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_greater(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_greater, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_greater

Return

Applicative

Original documentation for Builder.assert_greater

def assert_greater(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_greater to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_greater

def assert_greater(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x > y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_greater(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_greater(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] > y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_greater".

Returns: Op that raises InvalidArgumentError if x > y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
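
As a standalone sketch of the underlying TensorFlow pattern these lifted asserts wrap (author's illustration; a TF 1.x-era API is assumed), note how the control dependency forces the check to run before the op that uses x:

import tensorflow as tf

x = tf.constant([2.0, 3.0])
y = tf.constant([1.0, 1.0])

# The assert op must run before reduce_sum because of the dependency.
with tf.control_dependencies([tf.assert_greater(x, y)]):
    output = tf.reduce_sum(x)

with tf.Session() as sess:
    print(sess.run(output))  # 5.0; InvalidArgumentError if any x[i] <= y[i]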

def assert_greater_equal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_greater_equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_greater_equal

Return

Applicative

Original documentation for Builder.assert_greater_equal

def assert_greater_equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_greater_equal to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_greater_equal

def assert_greater_equal(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x >= y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_greater_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] >= y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_greater_equal".

Returns: Op that raises InvalidArgumentError if x >= y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_greater_equal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_greater_equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_greater_equal_layer

Return

Applicative

Original documentation for Builder.assert_greater_equal_layer

def assert_greater_equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_greater_equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_greater_equal

def assert_greater_equal(x, y, data=None, summarize=None, message=None, name=None):

Assert the condition x >= y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_greater_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] >= y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_greater_equal".

Returns: Op that raises InvalidArgumentError if x >= y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_greater_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_greater_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_greater_layer

Return

Applicative

Original documentation for Builder.assert_greater_layer

def assert_greater_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_greater, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_greater

def assert_greater(x, y, data=None, summarize=None, message=None, name=None):

Assert the condition x > y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_greater(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_greater(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] > y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_greater".

Returns: Op that raises InvalidArgumentError if x > y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_integer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_integer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_integer

Return

Applicative

Original documentation for Builder.assert_integer

def assert_integer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_integer to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_integer

def assert_integer(x, message=None, name=None)

Assert that x is of integer dtype.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_integer(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_integer(x)], x)

Args:
  x: Tensor whose basetype is integer and is not quantized.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_integer".

Raises:
  TypeError: If x.dtype is anything other than non-quantized integer.

Returns: A no_op that does nothing. Type can be determined statically.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
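
Unlike the element-wise comparison asserts above, assert_integer is a static check: the error surfaces while the graph is being built, not when it runs. A brief sketch (author's illustration):

import tensorflow as tf

i = tf.constant([1, 2, 3])    # int32: passes and returns a no_op
tf.assert_integer(i)

f = tf.constant([1.0, 2.0])   # float32: rejected at graph-construction time
try:
    tf.assert_integer(f)
except TypeError as e:
    print("caught: %s" % e)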

def assert_integer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_integer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_integer_layer

Return

Applicative

Original documentation for Builder.assert_integer_layer

def assert_integer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_integer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_integer

def assert_integer(x, message=None, name=None):

Assert that x is of integer dtype.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_integer(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_integer(x)], x)

Args:
  x: Tensor whose basetype is integer and is not quantized.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_integer".

Raises:
  TypeError: If x.dtype is anything other than non-quantized integer.

Returns: A no_op that does nothing. Type can be determined statically.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_less(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_less, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_less

Return

Applicative

Original documentation for Builder.assert_less

def assert_less(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_less to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_less

def assert_less(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x < y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_less(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_less(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] < y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_less".

Returns: Op that raises InvalidArgumentError if x < y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_less_equal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_less_equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_less_equal

Return

Applicative

Original documentation for Builder.assert_less_equal

def assert_less_equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_less_equal to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_less_equal

def assert_less_equal(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x <= y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_less_equal(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_less_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] <= y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_less_equal".

Returns: Op that raises InvalidArgumentError if x <= y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_less_equal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_less_equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_less_equal_layer

Return

Applicative

Original documentation for Builder.assert_less_equal_layer

def assert_less_equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_less_equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_less_equal

def assert_less_equal(x, y, data=None, summarize=None, message=None, name=None):

Assert the condition x <= y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_less_equal(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_less_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] <= y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_less_equal".

Returns: Op that raises InvalidArgumentError if x <= y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_less_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_less_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_less_layer

Return

Applicative

Original documentation for Builder.assert_less_layer

def assert_less_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_less, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_less

def assert_less(x, y, data=None, summarize=None, message=None, name=None):

Assert the condition x < y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_less(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_less(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] < y[i]. If both x and y are empty, this is trivially satisfied.

Args:
  x: Numeric Tensor.
  y: Numeric Tensor, same dtype as and broadcastable to x.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_less".

Returns: Op that raises InvalidArgumentError if x < y is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_negative(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_negative, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_negative

Return

Applicative

Original documentation for Builder.assert_negative

def assert_negative(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_negative to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_negative

def assert_negative(x, data=None, summarize=None, message=None, name=None)

Assert the condition x < 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_negative(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_negative(x)], x)

Negative means, for every element x[i] of x, we have x[i] < 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_negative".

Returns: Op raising InvalidArgumentError unless x is all negative.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_negative_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_negative_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_negative_layer

Return

Applicative

Original documentation for Builder.assert_negative_layer

def assert_negative_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_negative, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_negative

def assert_negative(x, data=None, summarize=None, message=None, name=None):

Assert the condition x < 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_negative(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_negative(x)], x)

Negative means, for every element x[i] of x, we have x[i] < 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_negative".

Returns: Op raising InvalidArgumentError unless x is all negative.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_non_negative(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_non_negative, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_non_negative

Return

Applicative

Original documentation for Builder.assert_non_negative

def assert_non_negative(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_non_negative to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_non_negative

def assert_non_negative(x, data=None, summarize=None, message=None, name=None)

Assert the condition x >= 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_non_negative(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_non_negative(x)], x)

Non-negative means, for every element x[i] of x, we have x[i] >= 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_non_negative".

Returns: Op raising InvalidArgumentError unless x is all non-negative.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_non_negative_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_non_negative_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_non_negative_layer

Return

Applicative

Original documentation for Builder.assert_non_negative_layer

def assert_non_negative_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_non_negative, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_non_negative

def assert_non_negative(x, data=None, summarize=None, message=None, name=None):

Assert the condition x >= 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_non_negative(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_non_negative(x)], x)

Non-negative means, for every element x[i] of x, we have x[i] >= 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_non_negative".

Returns: Op raising InvalidArgumentError unless x is all non-negative.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_non_positive(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_non_positive, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_non_positive

Return

Applicative

Original documentation for Builder.assert_non_positive

def assert_non_positive(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_non_positive to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_non_positive

def assert_non_positive(x, data=None, summarize=None, message=None, name=None)

Assert the condition x <= 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_non_positive(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_non_positive(x)], x)

Non-positive means, for every element x[i] of x, we have x[i] <= 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_non_positive".

Returns: Op raising InvalidArgumentError unless x is all non-positive.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_non_positive_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_non_positive_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_non_positive_layer

Return

Applicative

Original documentation for Builder.assert_non_positive_layer

def assert_non_positive_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_non_positive, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_non_positive

def assert_non_positive(x, data=None, summarize=None, message=None, name=None):

Assert the condition x <= 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_non_positive(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_non_positive(x)], x)

Non-positive means, for every element x[i] of x, we have x[i] <= 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_non_positive".

Returns: Op raising InvalidArgumentError unless x is all non-positive.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_positive(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_positive, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_positive

Return

Applicative

Original documentation for Builder.assert_positive

def assert_positive(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_positive to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_positive

def assert_positive(x, data=None, summarize=None, message=None, name=None)

Assert the condition x > 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_positive(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_positive(x)], x)

Positive means, for every element x[i] of x, we have x[i] > 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_positive".

Returns: Op raising InvalidArgumentError unless x is all positive.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_positive_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_positive_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_positive_layer

Return

Applicative

Original documentation for Builder.assert_positive_layer

def assert_positive_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_positive, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_positive

def assert_positive(x, data=None, summarize=None, message=None, name=None):

Assert the condition x > 0 holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_positive(x)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_positive(x)], x)

Positive means, for every element x[i] of x, we have x[i] > 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_positive".

Returns: Op raising InvalidArgumentError unless x is all positive.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_proper_iterable(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_proper_iterable, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_proper_iterable

Return

Applicative

Original documentation for Builder.assert_proper_iterable

def assert_proper_iterable(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_proper_iterable to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_proper_iterable

def assert_proper_iterable(values)

Static assert that values is a "proper" iterable.

Ops that expect iterables of Tensor can call this to validate input. Useful since Tensor, ndarray, and byte/text types are all iterables themselves.

Args:
  values: Object to be checked.

Raises:
  TypeError: If values is not iterable or is one of Tensor, SparseTensor, np.array, tf.compat.bytes_or_text_types.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
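
A short sketch of why this check exists (author's illustration): a lone Tensor is itself iterable, so it can slip through where a list of Tensors was intended; assert_proper_iterable rejects it up front.

import tensorflow as tf

tf.assert_proper_iterable([tf.constant(1), tf.constant(2)])  # OK: a real list

try:
    tf.assert_proper_iterable(tf.constant([1, 2]))  # a Tensor is iterable too
except TypeError as e:
    print("caught: %s" % e)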

def assert_proper_iterable_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_proper_iterable_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_proper_iterable_layer

Return

Applicative

Original documentation for Builder.assert_proper_iterable_layer

def assert_proper_iterable_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_proper_iterable, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_proper_iterable

def assert_proper_iterable(values):

Static assert that values is a "proper" iterable.

Ops that expect iterables of Tensor can call this to validate input. Useful since Tensor, ndarray, and byte/text types are all iterables themselves.

Args:
  values: Object to be checked.

Raises:
  TypeError: If values is not iterable or is one of Tensor, SparseTensor, np.array, tf.compat.bytes_or_text_types.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_rank(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_rank, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_rank

Return

Applicative

Original documentation for Builder.assert_rank

def assert_rank(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_rank to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_rank

def assert_rank(x, rank, data=None, summarize=None, message=None, name=None)

Assert x has rank equal to rank.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_rank(x, 2)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_rank(x, 2)], x)

Args:
  x: Numeric Tensor.
  rank: Scalar integer Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_rank".

Returns: Op raising InvalidArgumentError unless x has specified rank. If static checks determine x has correct rank, a no_op is returned.

Raises:
  ValueError: If static checks determine x has wrong rank.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
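
The two failure modes described above, a static ValueError versus a runtime InvalidArgumentError, can be seen in this sketch (author's illustration; TF 1.x assumed). With a fully known shape, a wrong rank fails immediately:

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2): rank known statically

with tf.control_dependencies([tf.assert_rank(x, 2)]):
    output = tf.reduce_sum(x)              # the check reduces to a no_op here

try:
    tf.assert_rank(x, 3)                   # known shape, wrong rank
except ValueError as e:
    print("caught: %s" % e)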

def assert_rank_at_least(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_rank_at_least, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_rank_at_least

Return

Applicative

Original documentation for Builder.assert_rank_at_least

def assert_rank_at_least(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_rank_at_least to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_rank_at_least

def assert_rank_at_least(x, rank, data=None, summarize=None, message=None, name=None)

Assert x has rank equal to rank or higher.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_rank_at_least(x, 2)], x)

Args:
  x: Numeric Tensor.
  rank: Scalar Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_rank_at_least".

Returns: Op raising InvalidArgumentError unless x has specified rank or higher. If static checks determine x has correct rank, a no_op is returned.

Raises:
  ValueError: If static checks determine x has wrong rank.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_rank_at_least_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_rank_at_least_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_rank_at_least_layer

Return

Applicative

Original documentation for Builder.assert_rank_at_least_layer

def assert_rank_at_least_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_rank_at_least, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_rank_at_least

def assert_rank_at_least(x, rank, data=None, summarize=None, message=None, name=None):

Assert x has rank equal to rank or higher.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_rank_at_least(x, 2)], x)

Args:
  x: Numeric Tensor.
  rank: Scalar Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_rank_at_least".

Returns: Op raising InvalidArgumentError unless x has specified rank or higher. If static checks determine x has correct rank, a no_op is returned.

Raises:
  ValueError: If static checks determine x has wrong rank.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_rank_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_rank_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_rank_layer

Return

Applicative

Original documentation for Builder.assert_rank_layer

def assert_rank_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_rank, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_rank

def assert_rank(x, rank, data=None, summarize=None, message=None, name=None):

Assert x has rank equal to rank.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_rank(x, 2)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_rank(x, 2)], x)

Args:
  x: Numeric Tensor.
  rank: Scalar integer Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
  summarize: Print this many entries of each tensor.
  message: A string to prefix to the default message.
  name: A name for this operation (optional). Defaults to "assert_rank".

Returns: Op raising InvalidArgumentError unless x has specified rank. If static checks determine x has correct rank, a no_op is returned.

Raises:
  ValueError: If static checks determine x has wrong rank.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_type(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_type, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_type

Return

Applicative

Original documentation for Builder.assert_type

def assert_type(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_type to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_type

def assert_type(tensor, tf_type, message=None, name=None)

Statically asserts that the given Tensor is of the specified type.

Args:
  tensor: A tensorflow Tensor.
  tf_type: A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc.).
  message: A string to prefix to the default message.
  name: A name to give this Op. Defaults to "assert_type".

Raises:
  TypeError: If the tensor's data type doesn't match tf_type.

Returns: A no_op that does nothing. Type can be determined statically.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
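
Like assert_integer, this is a static check; a minimal sketch (author's illustration):

import tensorflow as tf

x = tf.constant([1.0, 2.0])   # float32
tf.assert_type(x, tf.float32)  # OK: returns a no_op

try:
    tf.assert_type(x, tf.int64)  # dtype mismatch caught while building the graph
except TypeError as e:
    print("caught: %s" % e)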

def assert_type_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_type_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_type_layer

Return

Applicative

Original documentation for Builder.assert_type_layer

def assert_type_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_type, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_type

def assert_type(tensor, tf_type, message=None, name=None):

Statically asserts that the given Tensor is of the specified type.

Args:
  tensor: A tensorflow Tensor.
  tf_type: A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc.).
  message: A string to prefix to the default message.
  name: A name to give this Op. Defaults to "assert_type".

Raises:
  TypeError: If the tensor's data type doesn't match tf_type.

Returns: A no_op that does nothing. Type can be determined statically.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assert_variables_initialized(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_variables_initialized, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_variables_initialized

Return

Applicative

Original documentation for Builder.assert_variables_initialized

def assert_variables_initialized(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assert_variables_initialized to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assert_variables_initialized

def assert_variables_initialized(var_list=None)

Returns an Op to check if variables are initialized.

NOTE: This function is obsolete and will be removed in 6 months. Please change your implementation to use report_uninitialized_variables().

When run, the returned Op will raise the exception FailedPreconditionError if any of the variables has not yet been initialized.

Note: This function is implemented by trying to fetch the values of the variables. If one of the variables is not initialized a message may be logged by the C++ runtime. This is expected.

Args:
  var_list: List of Variable objects to check. Defaults to the value of all_variables().

Returns: An Op, or None if there are no variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
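
A sketch of the runtime behavior described above (author's illustration; recall the docs mark this function obsolete in favor of report_uninitialized_variables()):

import tensorflow as tf

v = tf.Variable(1.0)
check = tf.assert_variables_initialized([v])

with tf.Session() as sess:
    try:
        sess.run(check)              # fails: v has not been initialized yet
    except tf.errors.FailedPreconditionError:
        print("not initialized yet")
    sess.run(v.initializer)
    sess.run(check)                  # passes now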

def assert_variables_initialized_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assert_variables_initialized_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assert_variables_initialized_layer

Return

Applicative

Original documentation for Builder.assert_variables_initialized_layer

def assert_variables_initialized_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assert_variables_initialized, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assert_variables_initialized

def assert_variables_initialized(var_list=None):

Returns an Op to check if variables are initialized.

NOTE: This function is obsolete and will be removed in 6 months. Please change your implementation to use report_uninitialized_variables().

When run, the returned Op will raise the exception FailedPreconditionError if any of the variables has not yet been initialized.

Note: This function is implemented by trying to fetch the values of the variables. If one of the variables is not initialized a message may be logged by the C++ runtime. This is expected.

Args:
  var_list: List of Variable objects to check. Defaults to the value of all_variables().

Returns: An Op, or None if there are no variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assign(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assign, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assign

Return

Applicative

Original documentation for Builder.assign

def assign(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assign to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assign

def assign(ref, value, validate_shape=None, use_locking=None, name=None)

Update 'ref' by assigning 'value' to it.

This operation outputs "ref" after the assignment is done. This makes it easier to chain operations that need to use the reset value.

Args:
  ref: A mutable Tensor. Should be from a Variable node. May be uninitialized.
  value: A Tensor. Must have the same type as ref. The value to be assigned to the variable.
  validate_shape: An optional bool. Defaults to True. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
  use_locking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).

Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been reset.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
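
The chaining behavior described above, where the op outputs ref after the assignment, is what makes the following sketch work (author's illustration):

import tensorflow as tf

v = tf.Variable(0.0)
assigned = tf.assign(v, 10.0)  # evaluates to v's value after the assignment
doubled = assigned * 2.0       # therefore sees the new value

with tf.Session() as sess:
    sess.run(v.initializer)
    print(sess.run(doubled))   # 20.0, and v now holds 10.0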

def assign_add(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assign_add, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assign_add

Return

Applicative

Original documentation for Builder.assign_add

def assign_add(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assign_add to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assign_add

def assign_add(ref, value, use_locking=None, name=None)

Update 'ref' by adding 'value' to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.

Args:
  ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  value: A Tensor. Must have the same type as ref. The value to be added to the variable.
  use_locking: An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).

Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assign_add_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assign_add_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assign_add_layer

Return

Applicative

Original documentation for Builder.assign_add_layer

def assign_add_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assign_add, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assign_add

def assign_add(ref, value, use_locking=None, name=None):

Update 'ref' by adding 'value' to it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.

Args:
  ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  value: A Tensor. Must have the same type as ref. The value to be added to the variable.
  use_locking: An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).

Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assign_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assign_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assign_layer

Return

Applicative

Original documentation for Builder.assign_layer

def assign_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assign, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assign

def assign(ref, value, validate_shape=None, use_locking=None, name=None):

Update 'ref' by assigning 'value' to it.

This operation outputs "ref" after the assignment is done. This makes it easier to chain operations that need to use the reset value.

Args:
  ref: A mutable Tensor. Should be from a Variable node. May be uninitialized.
  value: A Tensor. Must have the same type as ref. The value to be assigned to the variable.
  validate_shape: An optional bool. Defaults to True. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
  use_locking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).

Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been reset.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assign_sub(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assign_sub, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assign_sub

Return

Applicative

Original documentation for Builder.assign_sub

def assign_sub(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.assign_sub to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.assign_sub

def assign_sub(ref, value, use_locking=None, name=None)

Update 'ref' by subtracting 'value' from it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.

Args:
  ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  value: A Tensor. Must have the same type as ref. The value to be subtracted from the variable.
  use_locking: An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).

Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def assign_sub_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.assign_sub_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.assign_sub_layer

Return

Applicative

Original documentation for Builder.assign_sub_layer

def assign_sub_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.assign_sub, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.assign_sub

def assign_sub(ref, value, use_locking=None, name=None):

Update 'ref' by subtracting 'value' from it.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.

Args:
  ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  value: A Tensor. Must have the same type as ref. The value to be subtracted from the variable.
  use_locking: An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).

Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def atan(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.atan, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.atan

Return

Applicative

Original documentation for Builder.atan

def atan(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.atan to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.atan

def atan(x, name=None)

Computes atan of x element-wise.

Args:

  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
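A quick sanity check of tf.atan (a sketch, assuming graph-mode TF):

```python
import numpy as np
import tensorflow as tf

x = tf.constant([0.0, 1.0, np.sqrt(3.0)])
y = tf.atan(x)  # element-wise arctangent

with tf.Session() as sess:
    print(sess.run(y))  # approximately [0, pi/4, pi/3]
```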

def atan_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.atan_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.atan_layer

Return

Applicative

Original documentation for Builder.atan_layer

def atan_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.atan, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.atan

def atan(x, name=None):

Computes atan of x element-wise.

Args:

  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
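The *_layer aliases all follow the expansion documented above; spelled out by hand for atan_layer it amounts to the following sketch (the placeholder x is hypothetical):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 8])

# atan_layer(10) is documented as fully_connected with tf.atan as the
# activation function:
h = tf.contrib.layers.fully_connected(x, 10, activation_fn=tf.atan)
```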

def atrous_conv2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.atrous_conv2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.atrous_conv2d

Return

Applicative

Original documentation for Builder.atrous_conv2d

def atrous_conv2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.atrous_conv2d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.atrous_conv2d

def atrous_conv2d(value, filters, rate, padding, name=None)

Atrous convolution (a.k.a. convolution with holes or dilated convolution).

Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D value and filters tensors. If the rate parameter is equal to one, it performs regular 2-D convolution. If the rate parameter is greater than one, it performs convolution with holes, sampling the input values every rate pixels in the height and width dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting rate - 1 zeros between two consecutive values of the filters along the height and width dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).

More specifically:

output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] *
      value[b, i + rate * di, j + rate * dj, q]

Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to conv2d_transpose in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.

For a description of atrous convolution and how it can be used for dense feature extraction, please see: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. The same operation is investigated further in Multi-Scale Context Aggregation by Dilated Convolutions. Previous works that effectively use atrous convolution in different ways are, among others, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks and [Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing.

There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces

atrous_conv2d(value, filters, rate, padding=padding)

to the following three operations:

paddings = ...
net = space_to_batch(value, paddings, block_size=rate)
net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
crops = ...
net = batch_to_space(net, crops, block_size=rate)

Advanced usage. Note the following optimization: A sequence of atrous_conv2d operations with identical rate parameters, 'SAME' padding, and filters with odd heights/widths:

net = atrous_conv2d(net, filters1, rate, padding="SAME")
net = atrous_conv2d(net, filters2, rate, padding="SAME")
...
net = atrous_conv2d(net, filtersK, rate, padding="SAME")

can be performed equivalently, and more cheaply in terms of computation and memory, as:

pad = ...  # padding so that the input dims are multiples of rate
net = space_to_batch(net, paddings=pad, block_size=rate)
net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
...
net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
net = batch_to_space(net, crops=pad, block_size=rate)

because a pair of consecutive space_to_batch and batch_to_space ops with the same block_size cancel out when their respective paddings and crops inputs are identical.

Args:

  • value: A 4-D Tensor of type float. It needs to be in the default "NHWC" format. Its shape is [batch, in_height, in_width, in_channels].
  • filters: A 4-D Tensor with the same type as value and shape [filter_height, filter_width, in_channels, out_channels]. filters' in_channels dimension must match that of value. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height filter_height + (filter_height - 1) * (rate - 1) and effective width filter_width + (filter_width - 1) * (rate - 1), produced by inserting rate - 1 zeros along consecutive elements across the filters' spatial dimensions.
  • rate: A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • name: Optional name for the returned tensor.

Returns: A Tensor with the same type as value.

Raises: ValueError: If input/output depth does not match filters' shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def atrous_conv2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.atrous_conv2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.atrous_conv2d_layer

Return

Applicative

Original documentation for Builder.atrous_conv2d_layer

def atrous_conv2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.atrous_conv2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.atrous_conv2d

def atrous_conv2d(value, filters, rate, padding, name=None):

Atrous convolution (a.k.a. convolution with holes or dilated convolution).

Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D value and filters tensors. If the rate parameter is equal to one, it performs regular 2-D convolution. If the rate parameter is greater than one, it performs convolution with holes, sampling the input values every rate pixels in the height and width dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting rate - 1 zeros between two consecutive values of the filters along the height and width dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).

More specifically:

output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] *
      value[b, i + rate * di, j + rate * dj, q]

Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to conv2d_transpose in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.

For a description of atrous convolution and how it can be used for dense feature extraction, please see: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. The same operation is investigated further in Multi-Scale Context Aggregation by Dilated Convolutions. Previous works that effectively use atrous convolution in different ways are, among others, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks and [Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing.

There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces

atrous_conv2d(value, filters, rate, padding=padding)

to the following three operations:

paddings = ...
net = space_to_batch(value, paddings, block_size=rate)
net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
crops = ...
net = batch_to_space(net, crops, block_size=rate)

Advanced usage. Note the following optimization: A sequence of atrous_conv2d operations with identical rate parameters, 'SAME' padding, and filters with odd heights/widths:

net = atrous_conv2d(net, filters1, rate, padding="SAME")
net = atrous_conv2d(net, filters2, rate, padding="SAME")
...
net = atrous_conv2d(net, filtersK, rate, padding="SAME")

can be performed equivalently, and more cheaply in terms of computation and memory, as:

pad = ...  # padding so that the input dims are multiples of rate
net = space_to_batch(net, paddings=pad, block_size=rate)
net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
...
net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
net = batch_to_space(net, crops=pad, block_size=rate)

because a pair of consecutive space_to_batch and batch_to_space ops with the same block_size cancel out when their respective paddings and crops inputs are identical.

Args:

  • value: A 4-D Tensor of type float. It needs to be in the default "NHWC" format. Its shape is [batch, in_height, in_width, in_channels].
  • filters: A 4-D Tensor with the same type as value and shape [filter_height, filter_width, in_channels, out_channels]. filters' in_channels dimension must match that of value. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height filter_height + (filter_height - 1) * (rate - 1) and effective width filter_width + (filter_width - 1) * (rate - 1), produced by inserting rate - 1 zeros along consecutive elements across the filters' spatial dimensions.
  • rate: A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • name: Optional name for the returned tensor.

Returns: A Tensor with the same type as value.

Raises: ValueError: If input/output depth does not match filters' shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
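To make the effective-filter-size arithmetic above concrete, a sketch with hypothetical shapes: 3x3 filters at rate=2 have an effective size of 3 + (3 - 1) * (2 - 1) = 5, i.e. a 5x5 receptive field at 3x3 parameter cost:

```python
import tensorflow as tf

value = tf.placeholder(tf.float32, [1, 32, 32, 8])   # NHWC input
filters = tf.placeholder(tf.float32, [3, 3, 8, 16])

out_regular = tf.nn.atrous_conv2d(value, filters, rate=1, padding="SAME")  # plain conv2d
out_dilated = tf.nn.atrous_conv2d(value, filters, rate=2, padding="SAME")  # 5x5 effective
```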

def audio_summary(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.audio_summary, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.audio_summary

Return

Applicative

Original documentation for Builder.audio_summary

def audio_summary(builder, tag):

THIS METHOD IS AUTOMATICALLY GENERATED

Same as tf.audio_summary(tag, tensor, sample_rate, max_outputs=3, collections=None, name=None) but with the summary tensor as its first parameter.

Return

Builder

Original documentation for tf.audio_summary

def audio_summary(tag, tensor, sample_rate, max_outputs=3, collections=None, name=None):

Outputs a Summary protocol buffer with audio.

The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate.

The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:

  • If max_outputs is 1, the summary value tag is 'tag/audio'.
  • If max_outputs is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc.

Args:

  • tag: A scalar Tensor of type string. Used to build the tag of the summary values.
  • tensor: A 3-D float32 Tensor of shape [batch_size, frames, channels] or a 2-D float32 Tensor of shape [batch_size, frames].
  • sample_rate: The sample rate of the signal in hertz.
  • max_outputs: Max number of batch elements to generate audio for.
  • collections: Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to [ops.GraphKeys.SUMMARIES].
  • name: A name for the operation (optional).

Returns: A scalar Tensor of type string. The serialized Summary protocol buffer.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
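A sketch of calling tf.audio_summary directly (the op was later superseded by tf.summary.audio; the placeholder below is hypothetical):

```python
import tensorflow as tf

# Batch of 2 one-second mono clips at 16 kHz, values in [-1.0, 1.0].
audio = tf.placeholder(tf.float32, [2, 16000])
summary_op = tf.audio_summary("speech", audio, sample_rate=16000)
```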

def avg_pool(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.avg_pool, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.avg_pool

Return

Applicative

Original documentation for Builder.avg_pool

def avg_pool(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.avg_pool to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.avg_pool

def avg_pool(value, ksize, strides, padding, data_format="NHWC", name=None)

Performs the average pooling on the input.

Each entry in output is the mean of the corresponding size ksize window in value.

Args:

  • value: A 4-D Tensor of shape [batch, height, width, channels] and type float32, float64, qint8, quint8, or qint32.
  • ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
  • strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • data_format: A string. 'NHWC' and 'NCHW' are supported.
  • name: Optional name for the operation.

Returns: A Tensor with the same type as value. The average pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
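A shape-oriented sketch of tf.nn.avg_pool with the usual 2x2/stride-2 setting (the input shape is hypothetical): ksize and strides index [batch, height, width, channels], so [1, 2, 2, 1] pools only spatially:

```python
import tensorflow as tf

value = tf.placeholder(tf.float32, [1, 4, 4, 3])  # NHWC
pooled = tf.nn.avg_pool(value,
                        ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1],
                        padding="VALID")          # -> [1, 2, 2, 3]
```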

def avg_pool3d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.avg_pool3d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.avg_pool3d

Return

Applicative

Original documentation for Builder.avg_pool3d

def avg_pool3d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.avg_pool3d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.avg_pool3d

def avg_pool3d(input, ksize, strides, padding, name=None)

Performs 3D average pooling on the input.

Args:

  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, channels] tensor to pool over.
  • ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
  • strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The average pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def avg_pool3d_grad(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.avg_pool3d_grad, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.avg_pool3d_grad

Return

Applicative

Original documentation for Builder.avg_pool3d_grad

def avg_pool3d_grad(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.avg_pool3d_grad to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.avg_pool3d_grad

def avg_pool3d_grad(orig_input_shape, grad, ksize, strides, padding, name=None)

Computes gradients of average pooling function.

Args:

  • orig_input_shape: A Tensor of type int32. The original input dimensions.
  • grad: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Output backprop of shape [batch, depth, rows, cols, channels].
  • ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
  • strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as grad. The backprop for input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def avg_pool3d_grad_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.avg_pool3d_grad_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.avg_pool3d_grad_layer

Return

Applicative

Original documentation for Builder.avg_pool3d_grad_layer

def avg_pool3d_grad_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.avg_pool3d_grad, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.avg_pool3d_grad

def avg_pool3d_grad(orig_input_shape, grad, ksize, strides, padding, name=None):

Computes gradients of average pooling function.

Args:

  • orig_input_shape: A Tensor of type int32. The original input dimensions.
  • grad: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Output backprop of shape [batch, depth, rows, cols, channels].
  • ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
  • strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as grad. The backprop for input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def avg_pool3d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.avg_pool3d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.avg_pool3d_layer

Return

Applicative

Original documentation for Builder.avg_pool3d_layer

def avg_pool3d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.avg_pool3d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.avg_pool3d

def avg_pool3d(input, ksize, strides, padding, name=None):

Performs 3D average pooling on the input.

Args:

  • input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, channels] tensor to pool over.
  • ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
  • strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The average pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def avg_pool_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.avg_pool_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.avg_pool_layer

Return

Applicative

Original documentation for Builder.avg_pool_layer

def avg_pool_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.avg_pool, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.avg_pool

def avg_pool(value, ksize, strides, padding, data_format="NHWC", name=None):

Performs the average pooling on the input.

Each entry in output is the mean of the corresponding size ksize window in value.

Args:

  • value: A 4-D Tensor of shape [batch, height, width, channels] and type float32, float64, qint8, quint8, or qint32.
  • ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
  • strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • data_format: A string. 'NHWC' and 'NCHW' are supported.
  • name: Optional name for the operation.

Returns: A Tensor with the same type as value. The average pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def batch_matmul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_matmul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_matmul_layer

Return

Applicative

Original documentation for Builder.batch_matmul_layer

def batch_matmul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.batch_matmul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.batch_matmul

def _batch_mat_mul(x, y, adj_x=None, adj_y=None, name=None):

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True, which are by default False.

The input tensors x and y are 3-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 3-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Args:

  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, complex64, complex128. 3-D or higher with shape [..., r_x, c_x].
  • y: A Tensor. Must have the same type as x. 3-D or higher with shape [..., r_y, c_y].
  • adj_x: An optional bool. Defaults to False. If True, adjoint the slices of x.
  • adj_y: An optional bool. Defaults to False. If True, adjoint the slices of y.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x. 3-D or higher with shape [..., r_o, c_o]

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
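A shape sketch of tf.batch_matmul (folded into tf.matmul in later TF releases); the placeholders are hypothetical:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [5, 2, 3])
y = tf.placeholder(tf.float32, [5, 3, 4])
z = tf.batch_matmul(x, y)                  # 5 independent 2x3 @ 3x4 -> [5, 2, 4]

x_t = tf.placeholder(tf.float32, [5, 3, 2])
z_t = tf.batch_matmul(x_t, y, adj_x=True)  # each slice adjointed first -> [5, 2, 4]
```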

def batch_norm_with_global_normalization(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_norm_with_global_normalization, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_norm_with_global_normalization

Return

Applicative

Original documentation for Builder.batch_norm_with_global_normalization

def batch_norm_with_global_normalization(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.batch_norm_with_global_normalization to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.batch_norm_with_global_normalization

def batch_norm_with_global_normalization(t, m, v, beta, gamma, variance_epsilon, scale_after_normalization, name=None)

Batch normalization.

This op is deprecated. See tf.nn.batch_normalization.

Args:

  • t: A 4D input Tensor.
  • m: A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
  • v: A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
  • beta: A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
  • gamma: A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.
  • variance_epsilon: A small float number to avoid dividing by 0.
  • scale_after_normalization: A bool indicating whether the resulting tensor needs to be multiplied with gamma.
  • name: A name for this operation (optional).

Returns: A batch-normalized t.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def batch_norm_with_global_normalization_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_norm_with_global_normalization_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_norm_with_global_normalization_layer

Return

Applicative

Original documentation for Builder.batch_norm_with_global_normalization_layer

def batch_norm_with_global_normalization_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.batch_norm_with_global_normalization, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.batch_norm_with_global_normalization

def batch_norm_with_global_normalization(t, m, v, beta, gamma, variance_epsilon, scale_after_normalization, name=None):

Batch normalization.

This op is deprecated. See tf.nn.batch_normalization.

Args:

  • t: A 4D input Tensor.
  • m: A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
  • v: A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
  • beta: A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
  • gamma: A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.
  • variance_epsilon: A small float number to avoid dividing by 0.
  • scale_after_normalization: A bool indicating whether the resulting tensor needs to be multiplied with gamma.
  • name: A name for this operation (optional).

Returns: A batch-normalized t.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def batch_normalization(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_normalization, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_normalization

Return

Applicative

Original documentation for Builder.batch_normalization

def batch_normalization(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.batch_normalization to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.batch_normalization

def batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)

Batch normalization.

As described in http://arxiv.org/abs/1502.03167. Normalizes a tensor by mean and variance, and applies (optionally) a scale \\(\gamma\\) to it, as well as an offset \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

mean, variance, offset and scale are all expected to be of one of two shapes:

  • In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=True) during training, or running averages thereof during inference.
  • In the common case where the 'depth' dimension is the last dimension in the input tensor x, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=False) during training, or running averages thereof during inference.

Args:

  • x: Input Tensor of arbitrary dimensionality.
  • mean: A mean Tensor.
  • variance: A variance Tensor.
  • offset: An offset Tensor, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
  • scale: A scale Tensor, often denoted \\(\gamma\\) in equations, or None. If present, the scale is applied to the normalized tensor.
  • variance_epsilon: A small float number to avoid dividing by 0.
  • name: A name for this operation (optional).

Returns: the normalized, scaled, offset tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
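A sketch of the common [batch, depth] case described above, wiring tf.nn.moments into tf.nn.batch_normalization (names and sizes are hypothetical):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64])
mean, variance = tf.nn.moments(x, axes=[0])  # 1-D, matching the depth dim
beta = tf.Variable(tf.zeros([64]))           # offset
gamma = tf.Variable(tf.ones([64]))           # scale
y = tf.nn.batch_normalization(x, mean, variance, beta, gamma,
                              variance_epsilon=1e-5)
```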

def batch_normalization_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_normalization_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_normalization_layer

Return

Applicative

Original documentation for Builder.batch_normalization_layer

def batch_normalization_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.batch_normalization, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.batch_normalization

def batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None):

Batch normalization.

As described in http://arxiv.org/abs/1502.03167. Normalizes a tensor by mean and variance, and applies (optionally) a scale \\(\gamma\\) to it, as well as an offset \\(\beta\\):

\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)

mean, variance, offset and scale are all expected to be of one of two shapes:

  • In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=True) during training, or running averages thereof during inference.
  • In the common case where the 'depth' dimension is the last dimension in the input tensor x, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=False) during training, or running averages thereof during inference.

Args:

  • x: Input Tensor of arbitrary dimensionality.
  • mean: A mean Tensor.
  • variance: A variance Tensor.
  • offset: An offset Tensor, often denoted \\(\beta\\) in equations, or None. If present, will be added to the normalized tensor.
  • scale: A scale Tensor, often denoted \\(\gamma\\) in equations, or None. If present, the scale is applied to the normalized tensor.
  • variance_epsilon: A small float number to avoid dividing by 0.
  • name: A name for this operation (optional).

Returns: the normalized, scaled, offset tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def batch_to_space(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_to_space, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_to_space

Return

Applicative

Original documentation for Builder.batch_to_space

def batch_to_space(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.batch_to_space to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.batch_to_space

def batch_to_space(input, crops, block_size, name=None)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions.

Args:

  • input: A Tensor. 4-D tensor with shape [batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]. Note that the batch size of the input tensor must be divisible by block_size * block_size.
  • crops: A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:

        crops = [[crop_top, crop_bottom], [crop_left, crop_right]]

  • block_size: An int that is >= 2.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [batch, height, width, depth], where:

    height = height_pad - crop_top - crop_bottom
    width = width_pad - crop_left - crop_right

The attr block_size must be greater than one. It indicates the block size.

Some examples:

(1) For the following input of shape [4, 1, 1, 1] and block_size of 2:

    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

The output tensor has shape [1, 2, 2, 1] and value:

    x = [[[[1], [2]], [[3], [4]]]]

(2) For the following input of shape [4, 1, 1, 3] and block_size of 2:

    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]

The output tensor has shape [1, 2, 2, 3] and value:

    x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]

(3) For the following input of shape [4, 2, 2, 1] and block_size of 2:

    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

The output tensor has shape [1, 4, 4, 1] and value:

    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]],
          [[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]

(4) For the following input of shape [8, 1, 2, 1] and block_size of 2:

    x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
         [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]

The output tensor has shape [2, 2, 4, 1] and value:

    x = [[[[1], [2], [3], [4]],
          [[5], [6], [7], [8]]],
         [[[9], [10], [11], [12]],
          [[13], [14], [15], [16]]]]

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
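Example (1) above, run end to end as a sketch (graph-mode TF assumed):

```python
import tensorflow as tf

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]
y = tf.batch_to_space(x, crops=[[0, 0], [0, 0]], block_size=2)

with tf.Session() as sess:
    print(sess.run(y))  # [[[[1], [2]], [[3], [4]]]], shape (1, 2, 2, 1)
```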

def batch_to_space_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_to_space_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_to_space_layer

Return

Applicative

Original documentation for Builder.batch_to_space_layer

def batch_to_space_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.batch_to_space, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.batch_to_space

def batch_to_space(input, crops, block_size, name=None):

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions.

Args:

  • input: A Tensor. 4-D tensor with shape [batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]. Note that the batch size of the input tensor must be divisible by block_size * block_size.
  • crops: A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:

        crops = [[crop_top, crop_bottom], [crop_left, crop_right]]

  • block_size: An int that is >= 2.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [batch, height, width, depth], where:

    height = height_pad - crop_top - crop_bottom
    width = width_pad - crop_left - crop_right

The attr block_size must be greater than one. It indicates the block size.

Some examples:

(1) For the following input of shape [4, 1, 1, 1] and block_size of 2:

    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

The output tensor has shape [1, 2, 2, 1] and value:

    x = [[[[1], [2]], [[3], [4]]]]

(2) For the following input of shape [4, 1, 1, 3] and block_size of 2:

    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]

The output tensor has shape [1, 2, 2, 3] and value:

    x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]

(3) For the following input of shape [4, 2, 2, 1] and block_size of 2:

    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

The output tensor has shape [1, 4, 4, 1] and value:

    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]],
          [[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]

(4) For the following input of shape [8, 1, 2, 1] and block_size of 2:

    x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
         [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]

The output tensor has shape [2, 2, 4, 1] and value:

    x = [[[[1], [2], [3], [4]],
          [[5], [6], [7], [8]]],
         [[[9], [10], [11], [12]],
          [[13], [14], [15], [16]]]]

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def batch_to_space_nd(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_to_space_nd, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_to_space_nd

Return

Applicative

Original documentation for Builder.batch_to_space_nd

def batch_to_space_nd(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.batch_to_space_nd to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.batch_to_space_nd

def batch_to_space_nd(input, block_shape, crops, name=None)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.

Args:

  • input: A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape: A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
  • crops: A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1].

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape:
     [block_shape[0], ..., block_shape[M-1],
      batch / prod(block_shape),
      input_shape[1], ..., input_shape[N-1]]

2. Permute dimensions of `reshaped` to produce `permuted` of shape
     [batch / prod(block_shape),

      input_shape[1], block_shape[0],
      ...,
      input_shape[M], block_shape[M-1],

      input_shape[M+1], ..., input_shape[N-1]]

3. Reshape `permuted` to produce `reshaped_permuted` of shape
     [batch / prod(block_shape),

      input_shape[1] * block_shape[0],
      ...,
      input_shape[M] * block_shape[M-1],

      input_shape[M+1],
      ...,
      input_shape[N-1]]

4. Crop the start and end of dimensions `[1, ..., M]` of
   `reshaped_permuted` according to `crops` to produce the output of shape:
     [batch / prod(block_shape),

      input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],
      ...,
      input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],

      input_shape[M+1], ..., input_shape[N-1]]

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [0, 0]]`:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

The output tensor has shape `[1, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [0, 0]]`:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

The output tensor has shape `[1, 2, 2, 3]` and value:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

The output tensor has shape `[1, 4, 4, 1]` and value:

```prettyprint
x = [[[[1],  [2],  [3],  [4]],
      [[5],  [6],  [7],  [8]],
      [[9],  [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [2, 0]]`:

```prettyprint
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]
```

The output tensor has shape `[2, 2, 4, 1]` and value:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
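The same data as example (1), but through the N-D op, where block_shape replaces the scalar block_size and crops is given per spatial dimension (a sketch, graph-mode TF assumed):

```python
import tensorflow as tf

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]
y = tf.batch_to_space_nd(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])

with tf.Session() as sess:
    print(sess.run(y))  # [[[[1], [2]], [[3], [4]]]], shape (1, 2, 2, 1)
```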

def batch_to_space_nd_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.batch_to_space_nd_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.batch_to_space_nd_layer

Return

Applicative

Original documentation for Builder.batch_to_space_nd_layer

def batch_to_space_nd_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.batch_to_space_nd, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.batch_to_space_nd

def batch_to_space_nd(input, block_shape, crops, name=None):

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.

Args:

  • input: A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape: A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
  • crops: A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1].

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape:
     [block_shape[0], ..., block_shape[M-1],
      batch / prod(block_shape),
      input_shape[1], ..., input_shape[N-1]]

2. Permute dimensions of `reshaped` to produce `permuted` of shape
     [batch / prod(block_shape),

      input_shape[1], block_shape[0],
      ...,
      input_shape[M], block_shape[M-1],

      input_shape[M+1], ..., input_shape[N-1]]

3. Reshape `permuted` to produce `reshaped_permuted` of shape
     [batch / prod(block_shape),

      input_shape[1] * block_shape[0],
      ...,
      input_shape[M] * block_shape[M-1],

      input_shape[M+1],
      ...,
      input_shape[N-1]]

4. Crop the start and end of dimensions `[1, ..., M]` of
   `reshaped_permuted` according to `crops` to produce the output of shape:
     [batch / prod(block_shape),

      input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],
      ...,
      input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],

      input_shape[M+1], ..., input_shape[N-1]]

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [0, 0]]`:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

The output tensor has shape `[1, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [0, 0]]`:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

The output tensor has shape `[1, 2, 2, 3]` and value:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

The output tensor has shape `[1, 4, 4, 1]` and value:

```prettyprint
x = [[[[1],  [2],  [3],  [4]],
      [[5],  [6],  [7],  [8]],
      [[9],  [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and
    `crops = [[0, 0], [2, 0]]`:

```prettyprint
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]
```

The output tensor has shape `[2, 2, 4, 1]` and value:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def betainc(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.betainc, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.betainc

Return

Applicative

Original documentation for Builder.betainc

def betainc(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.betainc to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.betainc

def betainc(a, b, x, name=None)

Compute the regularized incomplete beta integral \(I_x(a, b)\).

The regularized incomplete beta integral is defined as:

\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\)

where

\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1}\, dt\\)

is the incomplete beta function and \\(B(a, b)\\) is the complete beta function.

Args:

  • a: A Tensor. Must be one of the following types: float32, float64.
  • b: A Tensor. Must have the same type as a.
  • x: A Tensor. Must have the same type as a.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
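A quick sanity check of the definition above: for a = b = 1, \\(B(x; 1, 1) = x\\) and \\(B(1, 1) = 1\\), so \\(I_x(1, 1) = x\\) (a sketch, graph-mode TF assumed):

```python
import tensorflow as tf

a = tf.constant([1.0, 1.0, 1.0])
b = tf.constant([1.0, 1.0, 1.0])
x = tf.constant([0.25, 0.5, 0.75])
y = tf.betainc(a, b, x)

with tf.Session() as sess:
    print(sess.run(y))  # approximately [0.25, 0.5, 0.75]
```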

def betainc_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.betainc_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.betainc_layer

Return

Applicative

Original documentation for Builder.betainc_layer

def betainc_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.betainc, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.betainc

def betainc(a, b, x, name=None):

Compute the regularized incomplete beta integral \(I_x(a, b)\).

The regularized incomplete beta integral is defined as:

\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\)

where

\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1}\, dt\\)

is the incomplete beta function and \\(B(a, b)\\) is the complete beta function.

Args:

  • a: A Tensor. Must be one of the following types: float32, float64.
  • b: A Tensor. Must have the same type as a.
  • x: A Tensor. Must have the same type as a.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bias_add(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bias_add, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bias_add

Return

Applicative

Original documentation for Builder.bias_add

def bias_add(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.bias_add to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.bias_add

def bias_add(value, bias, data_format=None, name=None)

Adds bias to value.

This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions. Unlike tf.add, the type of bias is allowed to differ from value in the case where both types are quantized.

Args: value: A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128. bias: A 1-D Tensor with size matching the last dimension of value. Must be the same type as value unless value is a quantized type, in which case a different quantized type may be used. data_format: A string. 'NHWC' and 'NCHW' are supported. name: A name for the operation (optional).

Returns: A Tensor with the same type as value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
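A minimal sketch of the lifted method in a builder chain, assuming a hypothetical placeholder x and a 1-D bias whose size matches its last dimension:

```python
import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 3])  # hypothetical input
bias = tf.constant([0.1, 0.2, 0.3])              # 1-D, matches last dimension of x

y = (
    tb.build(x)
    .bias_add(bias)  # lifted tf.nn.bias_add(x, bias)
    .tensor()
)
```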

def bias_add_grad(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bias_add_grad, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bias_add_grad

Return

Applicative

Original documentation for Builder.bias_add_grad

def bias_add_grad(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.bias_add_grad that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.bias_add_grad

def bias_add_grad(out_backprop, data_format=None, name=None)

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.

Args: out_backprop: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Any number of dimensions. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. The tensor will be added to "in_channels", the third-to-the-last dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as out_backprop. 1-D with size the feature dimension of out_backprop.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bias_add_grad_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bias_add_grad_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bias_add_grad_layer

Return

Applicative

Original documentation for Builder.bias_add_grad_layer

def bias_add_grad_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.bias_add_grad, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.bias_add_grad

def bias_add_grad(out_backprop, data_format=None, name=None):

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.

Args: out_backprop: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Any number of dimensions. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. The tensor will be added to "in_channels", the third-to-the-last dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as out_backprop. 1-D with size the feature dimension of out_backprop.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bias_add_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bias_add_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bias_add_layer

Return

Applicative

Original documentation for Builder.bias_add_layer

def bias_add_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.bias_add, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.bias_add

def bias_add(value, bias, data_format=None, name=None):

Adds bias to value.

This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions. Unlike tf.add, the type of bias is allowed to differ from value in the case where both types are quantized.

Args: value: A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128. bias: A 1-D Tensor with size matching the last dimension of value. Must be the same type as value unless value is a quantized type, in which case a different quantized type may be used. data_format: A string. 'NHWC' and 'NCHW' are supported. name: A name for the operation (optional).

Returns: A Tensor with the same type as value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bias_add_v1(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bias_add_v1, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bias_add_v1

Return

Applicative

Original documentation for Builder.bias_add_v1

def bias_add_v1(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.bias_add_v1 that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.bias_add_v1

def bias_add_v1(value, bias, name=None)

Adds bias to value.

This is a deprecated version of bias_add and will soon be removed.

This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions. Unlike tf.add, the type of bias is allowed to differ from value in the case where both types are quantized.

Args: value: A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128. bias: A 1-D Tensor with size matching the last dimension of value. Must be the same type as value unless value is a quantized type, in which case a different quantized type may be used. name: A name for the operation (optional).

Returns: A Tensor with the same type as value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bias_add_v1_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bias_add_v1_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bias_add_v1_layer

Return

Applicative

Original documentation for Builder.bias_add_v1_layer

def bias_add_v1_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.bias_add_v1, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.bias_add_v1

def bias_add_v1(value, bias, name=None):

Adds bias to value.

This is a deprecated version of bias_add and will soon be removed.

This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions. Unlike tf.add, the type of bias is allowed to differ from value in the case where both types are quantized.

Args: value: A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128. bias: A 1-D Tensor with size matching the last dimension of value. Must be the same type as value unless value is a quantized type, in which case a different quantized type may be used. name: A name for the operation (optional).

Returns: A Tensor with the same type as value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bidirectional_dynamic_rnn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bidirectional_dynamic_rnn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bidirectional_dynamic_rnn

Return

Applicative

Original documentation for Builder.bidirectional_dynamic_rnn

def bidirectional_dynamic_rnn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.bidirectional_dynamic_rnn that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.bidirectional_dynamic_rnn

def bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, sequence_length=None, initial_state_fw=None, initial_state_bw=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)

Creates a dynamic version of bidirectional recurrent neural network.

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.

Args:

  • cell_fw: An instance of RNNCell, to be used for forward direction.
  • cell_bw: An instance of RNNCell, to be used for backward direction.
  • inputs: The RNN inputs. If time_major == False (default), this must be a tensor of shape [batch_size, max_time, input_size]. If time_major == True, this must be a tensor of shape [max_time, batch_size, input_size].
  • sequence_length: An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences.
  • initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size.
  • initial_state_bw: (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw.
  • dtype: (optional) The data type for the initial states and expected output. Required if the initial states are not provided or the RNN states have a heterogeneous dtype.
  • parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
  • swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
  • time_major: The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
  • scope: VariableScope for the created subgraph; defaults to "BiRNN".

Returns: A tuple (outputs, output_states) where: outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output Tensor. If time_major == False (default), output_fw will be a Tensor shaped: [batch_size, max_time, cell_fw.output_size] and output_bw will be a Tensor shaped: [batch_size, max_time, cell_bw.output_size]. If time_major == True, output_fw will be a Tensor shaped: [max_time, batch_size, cell_fw.output_size] and output_bw will be a Tensor shaped: [max_time, batch_size, cell_bw.output_size]. It returns a tuple instead of a single concatenated Tensor, unlike in the bidirectional_rnn. If the concatenated one is preferred, the forward and backward outputs can be concatenated as tf.concat(2, outputs). output_states: A tuple (output_state_fw, output_state_bw) containing the forward and the backward final states of bidirectional rnn.

Raises: TypeError: If cell_fw or cell_bw is not an instance of RNNCell.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bidirectional_dynamic_rnn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bidirectional_dynamic_rnn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bidirectional_dynamic_rnn_layer

Return

Applicative

Original documentation for Builder.bidirectional_dynamic_rnn_layer

def bidirectional_dynamic_rnn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.bidirectional_dynamic_rnn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.bidirectional_dynamic_rnn

def bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, sequence_length=None, initial_state_fw=None, initial_state_bw=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None):

Creates a dynamic version of bidirectional recurrent neural network.

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.

Args:

  • cell_fw: An instance of RNNCell, to be used for forward direction.
  • cell_bw: An instance of RNNCell, to be used for backward direction.
  • inputs: The RNN inputs. If time_major == False (default), this must be a tensor of shape [batch_size, max_time, input_size]. If time_major == True, this must be a tensor of shape [max_time, batch_size, input_size].
  • sequence_length: An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences.
  • initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size.
  • initial_state_bw: (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw.
  • dtype: (optional) The data type for the initial states and expected output. Required if the initial states are not provided or the RNN states have a heterogeneous dtype.
  • parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
  • swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
  • time_major: The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
  • scope: VariableScope for the created subgraph; defaults to "BiRNN".

Returns: A tuple (outputs, output_states) where: outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output Tensor. If time_major == False (default), output_fw will be a Tensor shaped: [batch_size, max_time, cell_fw.output_size] and output_bw will be a Tensor shaped: [batch_size, max_time, cell_bw.output_size]. If time_major == True, output_fw will be a Tensor shaped: [max_time, batch_size, cell_fw.output_size] and output_bw will be a Tensor shaped: [max_time, batch_size, cell_bw.output_size]. It returns a tuple instead of a single concatenated Tensor, unlike in the bidirectional_rnn. If the concatenated one is preferred, the forward and backward outputs can be concatenated as tf.concat(2, outputs). output_states: A tuple (output_state_fw, output_state_bw) containing the forward and the backward final states of bidirectional rnn.

Raises: TypeError: If cell_fw or cell_bw is not an instance of RNNCell.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bidirectional_rnn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bidirectional_rnn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bidirectional_rnn

Return

Applicative

Original documentation for Builder.bidirectional_rnn

def bidirectional_rnn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.bidirectional_rnn that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.bidirectional_rnn

def bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None)

Creates a bidirectional recurrent neural network.

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.

Args: cell_fw: An instance of RNNCell, to be used for forward direction. cell_bw: An instance of RNNCell, to be used for backward direction. inputs: A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements. initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size. initial_state_bw: (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw. dtype: (optional) The data type for the initial state. Required if either of the initial states are not provided. sequence_length: (optional) An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences. scope: VariableScope for the created subgraph; defaults to "BiRNN"

Returns: A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length T list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

Raises: TypeError: If cell_fw or cell_bw is not an instance of RNNCell. ValueError: If inputs is None or an empty list.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bidirectional_rnn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bidirectional_rnn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bidirectional_rnn_layer

Return

Applicative

Original documentation for Builder.bidirectional_rnn_layer

def bidirectional_rnn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.bidirectional_rnn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.bidirectional_rnn

def bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None):

Creates a bidirectional recurrent neural network.

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.

Args: cell_fw: An instance of RNNCell, to be used for forward direction. cell_bw: An instance of RNNCell, to be used for backward direction. inputs: A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements. initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size. initial_state_bw: (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw. dtype: (optional) The data type for the initial state. Required if either of the initial states are not provided. sequence_length: (optional) An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences. scope: VariableScope for the created subgraph; defaults to "BiRNN"

Returns: A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length T list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

Raises: TypeError: If cell_fw or cell_bw is not an instance of RNNCell. ValueError: If inputs is None or an empty list.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def bitcast(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bitcast, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bitcast

Return

Applicative

Original documentation for Builder.bitcast

def bitcast(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.bitcast that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.bitcast

def bitcast(input, type, name=None)

Bitcasts a tensor from one type to another without copying data.

Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type.

If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)].

If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...].

NOTE: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. type: A tf.DType from: tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half. name: A name for the operation (optional).

Returns: A Tensor of type type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
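To make the shape rule concrete, here is a sketch (the constant below is hypothetical): casting float32 (4 bytes) to uint8 (1 byte) appends a dimension of size sizeof(float32)/sizeof(uint8) = 4:

```python
import tensorflow as tf
from tensorbuilder import tb

x = tf.constant([1.0, 2.0], dtype=tf.float32)  # shape [2]

y = (
    tb.build(x)
    .bitcast(tf.uint8)  # lifted tf.bitcast(x, tf.uint8); result has shape [2, 4]
    .tensor()
)
```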

def bitcast_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.bitcast_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.bitcast_layer

Return

Applicative

Original documentation for Builder.bitcast_layer

def bitcast_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.bitcast, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.bitcast

def bitcast(input, type, name=None):

Bitcasts a tensor from one type to another without copying data.

Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type.

If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)].

If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...].

NOTE: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. type: A tf.DType from: tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half. name: A name for the operation (optional).

Returns: A Tensor of type type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def boolean_mask(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.boolean_mask, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.boolean_mask

Return

Applicative

Original documentation for Builder.boolean_mask

def boolean_mask(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.boolean_mask that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.boolean_mask

def boolean_mask(tensor, mask, name="boolean_mask")

Apply boolean mask to tensor. Numpy equivalent is tensor[mask].

```python
# 1-D example
tensor = [0, 1, 2, 3]
mask = [True, False, True, False]
boolean_mask(tensor, mask)  # ==> [0, 2]
```

In general, 0 < dim(mask) = K <= dim(tensor), and mask's shape must match the first K dimensions of tensor's shape. We then have: boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd] where (i1,...,iK) is the ith True entry of mask (row-major order).

Args: tensor: N-D tensor. mask: K-D boolean tensor, K <= N and K must be known statically. name: A name for this operation (optional).

Returns: Tensor populated by entries in tensor corresponding to True values in mask.

Raises: ValueError: If shapes do not conform.

Examples:

```python
# 2-D example
tensor = [[1, 2], [3, 4], [5, 6]]
mask = [True, False, True]
boolean_mask(tensor, mask)  # ==> [[1, 2], [5, 6]]
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
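A minimal sketch of the lifted method, reusing the 2-D example above; the builder supplies the tensor argument and mask is forwarded:

```python
import tensorflow as tf
from tensorbuilder import tb

tensor = tf.constant([[1, 2], [3, 4], [5, 6]])
mask = tf.constant([True, False, True])

masked = (
    tb.build(tensor)
    .boolean_mask(mask)  # lifted tf.boolean_mask(tensor, mask) ==> [[1, 2], [5, 6]]
    .tensor()
)
```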

def boolean_mask_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.boolean_mask_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.boolean_mask_layer

Return

Applicative

Original documentation for Builder.boolean_mask_layer

def boolean_mask_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.boolean_mask, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.boolean_mask

def boolean_mask(tensor, mask, name="boolean_mask"):

Apply boolean mask to tensor. Numpy equivalent is tensor[mask].

```python
# 1-D example
tensor = [0, 1, 2, 3]
mask = [True, False, True, False]
boolean_mask(tensor, mask)  # ==> [0, 2]
```

In general, 0 < dim(mask) = K <= dim(tensor), and mask's shape must match the first K dimensions of tensor's shape. We then have: boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd] where (i1,...,iK) is the ith True entry of mask (row-major order).

Args: tensor: N-D tensor. mask: K-D boolean tensor, K <= N and K must be known statically. name: A name for this operation (optional).

Returns: Tensor populated by entries in tensor corresponding to True values in mask.

Raises: ValueError: If shapes do not conform.

Examples:

```python
# 2-D example
tensor = [[1, 2], [3, 4], [5, 6]]
mask = [True, False, True]
boolean_mask(tensor, mask)  # ==> [[1, 2], [5, 6]]
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def branch(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.branch, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.branch

Return

Applicative

Original documentation for Builder.branch

def branch(builder, fn):

@immutable

Expects a function fn with type Builder -> iterable( Builder | BuilderTree ). This method enables you to branch the computational graph so you can easily create neural networks with more complex topologies.

Parameters

  • fn: a function of type Builder -> iterable( Builder | BuilderTree ).

Return

  • tensorbuilder.core.builders.BuilderTree

Examples

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = (
    tb.build(x)
    .branch(lambda x: [
        x.relu_layer(20)
    ,
        x.sigmoid_layer(20)
    ,
        x.tanh_layer(20)
    ])
    .softmax_layer(5)
    .tensor()
)

Same with the DSL

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = tb.pipe(
    x,
    [
        tb.relu_layer(20)
    ,
        tb.sigmoid_layer(20)
    ,
        tb.tanh_layer(20)
    ],
    tb.softmax_layer(5)
    .tensor()
)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def builders(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.builders, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.builders

Return

Applicative

Original documentation for BuilderTree.builders

def builders(self):

Returns a flattened list of the tensorbuilder.core.builders.Builders contained by this tree. The result is fully flattened even when sub-elements are themselves tensorbuilder.core.builders.BuilderTrees.

Return

  • list( tensorbuilder.core.builders.Builder )

Examples

This example creates a network that solves the XOR problem using sigmoid units

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.float32, shape=[None, 1])


#Network
[activation_builder, trainer_builder] = (
    tb.build(x)

    .sigmoid_layer(2)
    .linear_layer(1)

    .branch(lambda logit:
    [
        logit.sigmoid() # activation
    ,
        logit
        .sigmoid_cross_entropy_with_logits(y) # loss
        .map(tf.train.AdamOptimizer(0.01).minimize) # trainer
    ])
    .builders()
)

Same example using the DSL

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.float32, shape=[None, 1])


#Network
[activation_builder, trainer_builder] = tb.pipe(
    x,
    tb.sigmoid_layer(2)
    .linear_layer(1),
    [
        tb.sigmoid() # activation
    ,
        tb
        .sigmoid_cross_entropy_with_logits(y) # loss
        .map(tf.train.AdamOptimizer(0.01).minimize) # trainer
    ],
    tb.builders()
)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def case(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.case, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.case

Return

Applicative

Original documentation for Builder.case

def case(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.case that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.case

def case(pred_fn_pairs, default, exclusive=False, name="case")

Create a case operation.

The pred_fn_pairs parameter is a dict or list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. default is a callable generating a list of tensors. All the callables in pred_fn_pairs as well as default should return the same number and types of tensors.

If exclusive==True, all predicates are evaluated, and a logging operation with an error is returned if more than one of the predicates evaluates to True. If exclusive==False, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by default.

Example 1: Pseudocode: if (x < y) return 17; else return 23;

Expressions:

f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = case([(tf.less(x, y), f1)], default=f2)

Example 2: Pseudocode: if (x < y && x > z) raise OpError("Only one predicate may evaluate true"); if (x < y) return 17; else if (x > z) return 23; else return -1;

Expressions:

x = tf.constant(0)
y = tf.constant(1)
z = tf.constant(2)
def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = case({tf.less(x, y): f1, tf.greater(x, z): f2}, default=f3, exclusive=True)

Args: pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a callable which returns a list of tensors. default: A callable that returns a list of tensors. exclusive: True iff more than one predicate is allowed to evaluate to True. name: A name for this operation (optional).

Returns: The tensors returned by the first pair whose predicate evaluated to True, or those returned by default if none does.

Raises: TypeError: If pred_fn_pairs is not a list/dictionary. TypeError: If pred_fn_pairs is a list but does not contain 2-tuples. TypeError: If fns[i] is not callable for any i, or default is not callable.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def case_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.case_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.case_layer

Return

Applicative

Original documentation for Builder.case_layer

def case_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.case, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.case

def case(pred_fn_pairs, default, exclusive=False, name="case"):

Create a case operation.

The pred_fn_pairs parameter is a dict or list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. default is a callable generating a list of tensors. All the callables in pred_fn_pairs as well as default should return the same number and types of tensors.

If exclusive==True, all predicates are evaluated, and a logging operation with an error is returned if more than one of the predicates evaluates to True. If exclusive==False, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by default.

Example 1: Pseudocode: if (x < y) return 17; else return 23;

Expressions:

f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = case([(tf.less(x, y), f1)], default=f2)

Example 2: Pseudocode: if (x < y && x > z) raise OpError("Only one predicate may evaluate true"); if (x < y) return 17; else if (x > z) return 23; else return -1;

Expressions:

x = tf.constant(0)
y = tf.constant(1)
z = tf.constant(2)
def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = case({tf.less(x, y): f1, tf.greater(x, z): f2}, default=f3, exclusive=True)

Args: pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a callable which returns a list of tensors. default: A callable that returns a list of tensors. exclusive: True iff more than one predicate is allowed to evaluate to True. name: A name for this operation (optional).

Returns: The tensors returned by the first pair whose predicate evaluated to True, or those returned by default if none does.

Raises: TypeError: If pred_fn_pairs is not a list/dictionary. TypeError: If pred_fn_pairs is a list but does not contain 2-tuples. TypeError: If fns[i] is not callable for any i, or default is not callable.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cast(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cast, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cast

Return

Applicative

Original documentation for Builder.cast

def cast(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cast that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cast

def cast(x, dtype, name=None)

Casts a tensor to a new type.

The operation casts x (in case of Tensor) or x.values (in case of SparseTensor) to dtype.

For example:

```python
# tensor `a` is [1.8, 2.2], dtype=tf.float
tf.cast(a, tf.int32)  # ==> [1, 2], dtype=tf.int32
```

Args: x: A Tensor or SparseTensor. dtype: The destination type. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x.

Raises: TypeError: If x cannot be cast to the dtype.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
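A minimal sketch of both usage styles, assuming the constant below; tb.cast(tf.int32) composes Builder.cast, so it can also appear in a pipe, as in the branch and builders examples above:

```python
import tensorflow as tf
from tensorbuilder import tb

a = tf.constant([1.8, 2.2])

# Builder style
b = tb.build(a).cast(tf.int32).tensor()

# Applicative/DSL style
b2 = tb.pipe(
    a,
    tb.cast(tf.int32)
    .tensor()
)
```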

def cast_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cast_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cast_layer

Return

Applicative

Original documentation for Builder.cast_layer

def cast_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cast, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cast

def cast(x, dtype, name=None):

Casts a tensor to a new type.

The operation casts x (in case of Tensor) or x.values (in case of SparseTensor) to dtype.

For example:

```python
# tensor `a` is [1.8, 2.2], dtype=tf.float
tf.cast(a, tf.int32)  # ==> [1, 2], dtype=tf.int32
```

Args: x: A Tensor or SparseTensor. dtype: The destination type. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x.

Raises: TypeError: If x cannot be cast to the dtype.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ceil(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ceil, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ceil

Return

Applicative

Original documentation for Builder.ceil

def ceil(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ceil that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ceil

def ceil(x, name=None)

Returns the element-wise smallest integer not less than x.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ceil_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ceil_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ceil_layer

Return

Applicative

Original documentation for Builder.ceil_layer

def ceil_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ceil, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ceil

def ceil(x, name=None):

Returns the element-wise smallest integer not less than x.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def check_numerics(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.check_numerics, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.check_numerics

Return

Applicative

Original documentation for Builder.check_numerics

def check_numerics(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.check_numerics that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.check_numerics

def check_numerics(tensor, message, name=None)

Checks a tensor for NaN and Inf values.

When run, reports an InvalidArgument error if tensor has any values that are not a number (NaN) or infinity (Inf). Otherwise, passes tensor as-is.

Args: tensor: A Tensor. Must be one of the following types: half, float32, float64. message: A string. Prefix of the error message. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
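Because the op passes the tensor through unchanged, the lifted version can be dropped between layers as a debugging probe. A sketch, assuming a hypothetical placeholder and the layer methods shown elsewhere in this module:

```python
import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])  # hypothetical input

h = (
    tb.build(x)
    .tanh_layer(20)
    .check_numerics("NaN/Inf after tanh_layer")  # lifted tf.check_numerics(tensor, message)
    .softmax_layer(5)
    .tensor()
)
```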

def check_numerics_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.check_numerics_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.check_numerics_layer

Return

Applicative

Original documentation for Builder.check_numerics_layer

def check_numerics_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.check_numerics, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.check_numerics

def check_numerics(tensor, message, name=None):

Checks a tensor for NaN and Inf values.

When run, reports an InvalidArgument error if tensor has any values that are not a number (NaN) or infinity (Inf). Otherwise, passes tensor as-is.

Args: tensor: A Tensor. Must be one of the following types: half, float32, float64. message: A string. Prefix of the error message. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cholesky(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cholesky, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cholesky

Return

Applicative

Original documentation for Builder.cholesky

def cholesky(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cholesky that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cholesky

def cholesky(input, name=None)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix Cholesky decomposition above. The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].

Args: input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Shape is [..., M, M].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cholesky_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cholesky_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cholesky_layer

Return

Applicative

Original documentation for Builder.cholesky_layer

def cholesky_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cholesky, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cholesky

def cholesky(input, name=None):

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix Cholesky decomposition above. The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].

Args: input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Shape is [..., M, M].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cholesky_solve(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cholesky_solve, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cholesky_solve

Return

Applicative

Original documentation for Builder.cholesky_solve

def cholesky_solve(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cholesky_solve that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cholesky_solve

def cholesky_solve(chol, rhs, name=None)

Solves systems of linear eqns A X = RHS, given Cholesky factorizations.

```python
# Solve 10 separate 2x2 linear systems:
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 1
chol = tf.cholesky(A)             # shape 10 x 2 x 2
X = tf.cholesky_solve(chol, RHS)  # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]

# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 5
...
X[3, :, 2]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
```

Args: chol: A Tensor. Must be float32 or float64, shape is [..., M, M]. Cholesky factorization of A, e.g. chol = tf.cholesky(A). For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of chol are used. The strictly upper part is assumed to be zero and not accessed. rhs: A Tensor, same type as chol, shape is [..., M, K]. name: A name to give this Op. Defaults to cholesky_solve.

Returns: Solution to A x = rhs, shape [..., M, K].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
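Since the lifted methods chain, the factor-then-solve pattern above can be written as a single builder pipeline. A sketch, with a hypothetical 2x2 symmetric positive-definite system:

```python
import tensorflow as tf
from tensorbuilder import tb

A = tf.constant([[4.0, 2.0], [2.0, 3.0]])  # hypothetical SPD matrix
rhs = tf.constant([[1.0], [2.0]])

solution = (
    tb.build(A)
    .cholesky()           # lifted tf.cholesky(A)
    .cholesky_solve(rhs)  # lifted tf.cholesky_solve(chol, rhs)
    .tensor()
)
```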

def cholesky_solve_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cholesky_solve_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cholesky_solve_layer

Return

Applicative

Original documentation for Builder.cholesky_solve_layer

def cholesky_solve_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cholesky_solve, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cholesky_solve

def cholesky_solve(chol, rhs, name=None):

Solves systems of linear eqns A X = RHS, given Cholesky factorizations.

```python
# Solve 10 separate 2x2 linear systems:
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 1
chol = tf.cholesky(A)             # shape 10 x 2 x 2
X = tf.cholesky_solve(chol, RHS)  # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]

# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 5
...
X[3, :, 2]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
```

Args: chol: A Tensor. Must be float32 or float64, shape is [..., M, M]. Cholesky factorization of A, e.g. chol = tf.cholesky(A). For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of chol are used. The strictly upper part is assumed to be zero and not accessed. rhs: A Tensor, same type as chol, shape is [..., M, K]. name: A name to give this Op. Defaults to cholesky_solve.

Returns: Solution to A x = rhs, shape [..., M, K].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_average_norm(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_average_norm, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_average_norm

Return

Applicative

Original documentation for Builder.clip_by_average_norm

def clip_by_average_norm(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.clip_by_average_norm that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.clip_by_average_norm

def clip_by_average_norm(t, clip_norm, name=None)

Clips tensor values to a maximum average L2-norm.

Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its average L2-norm is less than or equal to clip_norm. Specifically, if the average L2-norm is already less than or equal to clip_norm, then t is not modified. If the average L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to:

t * clip_norm / l2norm_avg(t)

In this case, the average L2-norm of the output tensor is clip_norm.

This operation is typically used to clip gradients before applying them with an optimizer.

Args: t: A Tensor. clip_norm: A 0-D (scalar) Tensor > 0. A maximum clipping value. name: A name for the operation (optional).

Returns: A clipped Tensor.
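To make the formula above concrete, here is a minimal NumPy sketch (an illustration added here, not part of the original documentation); it assumes l2norm_avg(t) is the L2-norm of t divided by the number of elements, which is what the description above implies:

```python
import numpy as np

def clip_by_average_norm_np(t, clip_norm):
    # Average L2-norm: the L2-norm divided by the number of elements.
    l2norm_avg = np.linalg.norm(t) / t.size
    if l2norm_avg <= clip_norm:
        return t  # already within the bound: unchanged
    return t * clip_norm / l2norm_avg  # rescale so the average norm equals clip_norm

t = np.array([3.0, 4.0])  # L2-norm = 5.0, average L2-norm = 2.5
clipped = clip_by_average_norm_np(t, clip_norm=1.0)
print(np.linalg.norm(clipped) / clipped.size)  # -> 1.0
```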

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_average_norm_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_average_norm_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_average_norm_layer

Return

Applicative

Original documentation for Builder.clip_by_average_norm_layer

def clip_by_average_norm_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.clip_by_average_norm, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.clip_by_average_norm

def clip_by_average_norm(t, clip_norm, name=None):

Clips tensor values to a maximum average L2-norm.

Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its average L2-norm is less than or equal to clip_norm. Specifically, if the average L2-norm is already less than or equal to clip_norm, then t is not modified. If the average L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to:

t * clip_norm / l2norm_avg(t)

In this case, the average L2-norm of the output tensor is clip_norm.

This operation is typically used to clip gradients before applying them with an optimizer.

Args:

  • t: A Tensor.
  • clip_norm: A 0-D (scalar) Tensor > 0. A maximum clipping value.
  • name: A name for the operation (optional).

Returns: A clipped Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_global_norm(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_global_norm, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_global_norm

Return

Applicative

Original documentation for Builder.clip_by_global_norm

def clip_by_global_norm(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.clip_by_global_norm to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.clip_by_global_norm

def clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)

Clips values of multiple tensors by the ratio of the sum of their norms.

Given a tuple or list of tensors t_list, and a clipping ratio clip_norm, this operation returns a list of clipped tensors list_clipped and the global norm (global_norm) of all tensors in t_list. Optionally, if you've already computed the global norm for t_list, you can specify the global norm with use_norm.

To perform the clipping, the values t_list[i] are set to:

t_list[i] * clip_norm / max(global_norm, clip_norm)

where:

global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))

If clip_norm > global_norm then the entries in t_list remain as they are, otherwise they're all shrunk by the global ratio.

Any of the entries of t_list that are of type None are ignored.

This is the correct way to perform gradient clipping (for example, see Pascanu et al., 2012 (pdf)).

However, it is slower than clip_by_norm() because all the parameters must be ready before the clipping operation can be performed.

Args:

  • t_list: A tuple or list of mixed Tensors, IndexedSlices, or None.
  • clip_norm: A 0-D (scalar) Tensor > 0. The clipping ratio.
  • use_norm: A 0-D (scalar) Tensor of type float (optional). The global norm to use. If not provided, global_norm() is used to compute the norm.
  • name: A name for the operation (optional).

Returns:

  • list_clipped: A list of Tensors of the same type as t_list.
  • global_norm: A 0-D (scalar) Tensor representing the global norm.

Raises: TypeError: If t_list is not a sequence.
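As a usage sketch (added here, not part of the original documentation), the typical gradient-clipping pattern with this function looks as follows; the toy model is arbitrary and only serves to make the snippet self-contained:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 10])
w = tf.Variable(tf.zeros([10, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

optimizer = tf.train.GradientDescentOptimizer(0.1)
grads = tf.gradients(loss, [w])
# Rescale all gradients jointly so that their global norm is at most 5.0.
clipped_grads, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(list(zip(clipped_grads, [w])))
```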

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_global_norm_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_global_norm_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_global_norm_layer

Return

Applicative

Original documentation for Builder.clip_by_global_norm_layer

def clip_by_global_norm_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.clip_by_global_norm, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.clip_by_global_norm

def clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None):

Clips values of multiple tensors by the ratio of the sum of their norms.

Given a tuple or list of tensors t_list, and a clipping ratio clip_norm, this operation returns a list of clipped tensors list_clipped and the global norm (global_norm) of all tensors in t_list. Optionally, if you've already computed the global norm for t_list, you can specify the global norm with use_norm.

To perform the clipping, the values t_list[i] are set to:

t_list[i] * clip_norm / max(global_norm, clip_norm)

where:

global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))

If clip_norm > global_norm then the entries in t_list remain as they are, otherwise they're all shrunk by the global ratio.

Any of the entries of t_list that are of type None are ignored.

This is the correct way to perform gradient clipping (for example, see Pascanu et al., 2012 (pdf)).

However, it is slower than clip_by_norm() because all the parameters must be ready before the clipping operation can be performed.

Args:

  • t_list: A tuple or list of mixed Tensors, IndexedSlices, or None.
  • clip_norm: A 0-D (scalar) Tensor > 0. The clipping ratio.
  • use_norm: A 0-D (scalar) Tensor of type float (optional). The global norm to use. If not provided, global_norm() is used to compute the norm.
  • name: A name for the operation (optional).

Returns:

  • list_clipped: A list of Tensors of the same type as t_list.
  • global_norm: A 0-D (scalar) Tensor representing the global norm.

Raises: TypeError: If t_list is not a sequence.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_norm(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_norm, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_norm

Return

Applicative

Original documentation for Builder.clip_by_norm

def clip_by_norm(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.clip_by_norm to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.clip_by_norm

def clip_by_norm(t, clip_norm, axes=None, name=None)

Clips tensor values to a maximum L2-norm.

Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its L2-norm is less than or equal to clip_norm, along the dimensions given in axes. Specifically, in the default case where all dimensions are used for calculation, if the L2-norm of t is already less than or equal to clip_norm, then t is not modified. If the L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to:

t * clip_norm / l2norm(t)

In this case, the L2-norm of the output tensor is clip_norm.

As another example, if t is a matrix and axes == [1], then each row of the output will have L2-norm equal to clip_norm. If axes == [0] instead, each column of the output will be clipped.

This operation is typically used to clip gradients before applying them with an optimizer.

Args:

  • t: A Tensor.
  • clip_norm: A 0-D (scalar) Tensor > 0. A maximum clipping value.
  • axes: A 1-D (vector) Tensor of type int32 containing the dimensions to use for computing the L2-norm. If None (the default), uses all dimensions.
  • name: A name for the operation (optional).

Returns: A clipped Tensor.
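To illustrate the axes behaviour described above, here is an equivalent NumPy computation (a sketch added for clarity, not part of the original documentation):

```python
import numpy as np

t = np.array([[3.0, 4.0],
              [6.0, 8.0]])

# With axes == [1], each row is rescaled so its L2-norm is at most clip_norm.
clip_norm = 1.0
row_norms = np.linalg.norm(t, axis=1, keepdims=True)  # [[5.], [10.]]
clipped = np.where(row_norms > clip_norm, t * clip_norm / row_norms, t)
print(np.linalg.norm(clipped, axis=1))  # -> [1. 1.]
```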

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_norm_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_norm_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_norm_layer

Return

Applicative

Original documentation for Builder.clip_by_norm_layer

def clip_by_norm_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.clip_by_norm, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.clip_by_norm

def clip_by_norm(t, clip_norm, axes=None, name=None):

Clips tensor values to a maximum L2-norm.

Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its L2-norm is less than or equal to clip_norm, along the dimensions given in axes. Specifically, in the default case where all dimensions are used for calculation, if the L2-norm of t is already less than or equal to clip_norm, then t is not modified. If the L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to:

t * clip_norm / l2norm(t)

In this case, the L2-norm of the output tensor is clip_norm.

As another example, if t is a matrix and axes == [1], then each row of the output will have L2-norm equal to clip_norm. If axes == [0] instead, each column of the output will be clipped.

This operation is typically used to clip gradients before applying them with an optimizer.

Args:

  • t: A Tensor.
  • clip_norm: A 0-D (scalar) Tensor > 0. A maximum clipping value.
  • axes: A 1-D (vector) Tensor of type int32 containing the dimensions to use for computing the L2-norm. If None (the default), uses all dimensions.
  • name: A name for the operation (optional).

Returns: A clipped Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_value(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_value, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_value

Return

Applicative

Original documentation for Builder.clip_by_value

def clip_by_value(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.clip_by_value to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.clip_by_value

def clip_by_value(t, clip_value_min, clip_value_max, name=None)

Clips tensor values to a specified min and max.

Given a tensor t, this operation returns a tensor of the same type and shape as t with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.

Args:

  • t: A Tensor.
  • clip_value_min: A 0-D (scalar) Tensor. The minimum value to clip by.
  • clip_value_max: A 0-D (scalar) Tensor. The maximum value to clip by.
  • name: A name for the operation (optional).

Returns: A clipped Tensor.
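For intuition (an added sketch, not part of the original documentation), the clipping semantics match NumPy's np.clip:

```python
import numpy as np

t = np.array([-2.0, 0.5, 3.0])
# Values below the minimum become the minimum; values above the maximum
# become the maximum; everything else passes through unchanged.
print(np.clip(t, -1.0, 1.0))  # -> [-1.   0.5  1. ]
```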

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def clip_by_value_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.clip_by_value_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.clip_by_value_layer

Return

Applicative

Original documentation for Builder.clip_by_value_layer

def clip_by_value_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.clip_by_value, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.clip_by_value

def clip_by_value(t, clip_value_min, clip_value_max, name=None):

Clips tensor values to a specified min and max.

Given a tensor t, this operation returns a tensor of the same type and shape as t with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.

Args:

  • t: A Tensor.
  • clip_value_min: A 0-D (scalar) Tensor. The minimum value to clip by.
  • clip_value_max: A 0-D (scalar) Tensor. The maximum value to clip by.
  • name: A name for the operation (optional).

Returns: A clipped Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def compile(

self, *ast)

compile takes an object ast, which must be part of the domain of the DSL, and returns a function. It applies the rules of the DSL to create an actual Python function that does what you intend. Normally you will just use pipe, which not only compiles the DSL but also applies the resulting computation to a given Tensor/Builder. However, if you are building an API this might be useful, since you can create a function from an AST, and that function can itself be used as an element of another AST because the final elements of the DSL are functions.

Arguments

  • *ast: a sequence of elements of the DSL.

Return

A function

Examples

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

f = tb.compile(
    tb.build, #accept a Tensor as a parameter and create a builder so you can use the rest of the methods
    [
        { tf.device("/gpu:0"):
            tb.relu_layer(20)
        }
    ,
        { tf.device("/gpu:1"):
            tb.sigmoid_layer(20)
        }
    ,
        { tf.device("/cpu:0"):
            tb.tanh_layer(20)
        }
    ],
    tb.relu_layer(10)
    .tensor()
)

h = f(x)
def compile(self, *ast):
    """
    `compile` takes an object `ast`, which must be part of the domain of the DSL, and returns a function. It applies the rules of the DSL to create an actual Python function that does what you intend. Normally you will just use pipe, which not only compiles the DSL but also applies the resulting computation to a given Tensor/Builder. However, if you are building an API this might be useful, since you can create a function from an AST, and that function can itself be used as an element of another AST because the final elements of the DSL are functions.
    **Arguments**
    * `*ast`: a sequence of elements of the DSL.
    **Return**
    A function
    **Examples**
        import tensorflow as tf
        from tensorbuilder import tb
        x = tf.placeholder(tf.float32, shape=[None, 10])
        f = tb.compile(
            tb.build, #accept a Tensor as a parameter and create a builder so you can use the rest of the methods
            [
                { tf.device("/gpu:0"):
                    tb.relu_layer(20)
                }
            ,
                { tf.device("/gpu:1"):
                    tb.sigmoid_layer(20)
                }
            ,
                { tf.device("/cpu:0"):
                    tb.tanh_layer(20)
                }
            ],
            tb.relu_layer(10)
            .tensor()
        )
        h = f(x)
    """
    return _compile(ast)

def complex(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.complex, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.complex

Return

Applicative

Original documentation for Builder.complex

def complex(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.complex to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.complex

def complex(real, imag, name=None)

Converts two real numbers to a complex number.

Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form (a + bj), where a represents the real part and b represents the imag part.

The input tensors real and imag must have the same shape.

For example:

```
# tensor 'real' is [2.25, 3.25]
# tensor 'imag' is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
```

Args:

  • real: A Tensor. Must be one of the following types: float32, float64.
  • imag: A Tensor. Must have the same type as real.
  • name: A name for the operation (optional).

Returns: A Tensor of type complex64 or complex128.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def complex_abs(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.complex_abs, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.complex_abs

Return

Applicative

Original documentation for Builder.complex_abs

def complex_abs(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.complex_abs to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.complex_abs

def complex_abs(x, name=None)

Computes the complex absolute value of a tensor.

Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \( \sqrt{a^2 + b^2}\).

For example:

```
# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
tf.complex_abs(x) ==> [5.25594902, 6.60492229]
```

Args:

  • x: A Tensor of type complex64 or complex128.
  • name: A name for the operation (optional).

Returns: A Tensor of type float32 or float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def complex_abs_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.complex_abs_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.complex_abs_layer

Return

Applicative

Original documentation for Builder.complex_abs_layer

def complex_abs_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.complex_abs, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.complex_abs

def complex_abs(x, name=None):

Computes the complex absolute value of a tensor.

Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \( \sqrt{a^2 + b^2}\).

For example:

```
# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
tf.complex_abs(x) ==> [5.25594902, 6.60492229]
```

Args:

  • x: A Tensor of type complex64 or complex128.
  • name: A name for the operation (optional).

Returns: A Tensor of type float32 or float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def complex_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.complex_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.complex_layer

Return

Applicative

Original documentation for Builder.complex_layer

def complex_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.complex, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.complex

def complex(real, imag, name=None):

Converts two real numbers to a complex number.

Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form (a + bj), where a represents the real part and b represents the imag part.

The input tensors real and imag must have the same shape.

For example:

```
# tensor 'real' is [2.25, 3.25]
# tensor 'imag' is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
```

Args:

  • real: A Tensor. Must be one of the following types: float32, float64.
  • imag: A Tensor. Must have the same type as real.
  • name: A name for the operation (optional).

Returns: A Tensor of type complex64 or complex128.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def compose(

app, g, *args, **kwargs)

Takes in a function g and composes it with tensorbuilder.core.Applicative.f as g o f. All *args and **kwargs are forwarded to g. This is an essential method, since most of the registered methods are implemented through it.

Arguments

  • g: A function
  • All *args and **kwargs are forwarded to g

Return

Applicative

Examples

import tensorflow as tf
from tensorbuilder import tb
def compose(app, g, *args, **kwargs):
    """
    Takes in a function `g` and composes it with `tensorbuilder.core.Applicative.f` as `g o f`. All \*args and \*\*kwargs are forwarded to `g`. This is an essential method, since most of the registered methods are implemented through it.
    **Arguments**
    * `g`: A function
    * All \*args and \*\*kwargs are forwarded to `g`
    **Return**
    Applicative
    **Examples**
        import tensorflow as tf
        from tensorbuilder import tb
    """
    return app._unit(lambda x: g(app.f(x), *args, **kwargs))
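The Examples section of the docstring above is truncated after the imports. The following sketch (added here, and illustrative only) spells out the semantics that follow from the source: tb.compose(g, *args, **kwargs) returns an Applicative whose wrapped function is lambda x: g(tb.f(x), *args, **kwargs). The relu_layer method is borrowed from the compile example earlier in this module; treat the exact pipeline as an assumption.

```python
import tensorflow as tf
from tensorbuilder import tb

# The auto-generated methods in this class are defined in terms of compose;
# e.g. per its docs, tb.relu_layer(20) is an alias for
# tb.compose(Builder.relu_layer, 20). An explicit equivalent:
app = tb.compose(lambda builder, size: builder.relu_layer(size), 20)
```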

def compute_accidental_hits(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.compute_accidental_hits, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.compute_accidental_hits

Return

Applicative

Original documentation for Builder.compute_accidental_hits

def compute_accidental_hits(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.compute_accidental_hits to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.compute_accidental_hits

def compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)

Compute the position ids in sampled_candidates matching true_classes.

In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.

See our Candidate Sampling Algorithms Reference.

We presuppose that the sampled_candidates are unique.

We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples (index, id, weight), where index represents the row number in true_classes, id represents the position in sampled_candidates, and weight is -FLOAT_MAX.

The result of this op should be passed through a sparse_to_dense operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.

Args:

  • true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
  • sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled_candidates output of CandidateSampler.
  • num_true: An int. The number of target classes per training example.
  • seed: An int. An operation-specific seed. Default is 0.
  • name: A name for the operation (optional).

Returns:

  • indices: A Tensor of type int32 and shape [num_accidental_hits]. Values indicate rows in true_classes.
  • ids: A Tensor of type int64 and shape [num_accidental_hits]. Values indicate positions in sampled_candidates.
  • weights: A Tensor of type float and shape [num_accidental_hits]. Each value is -FLOAT_MAX.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def compute_accidental_hits_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.compute_accidental_hits_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.compute_accidental_hits_layer

Return

Applicative

Original documentation for Builder.compute_accidental_hits_layer

def compute_accidental_hits_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.compute_accidental_hits, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.compute_accidental_hits

def compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None):

Compute the position ids in sampled_candidates matching true_classes.

In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.

See our Candidate Sampling Algorithms Reference.

We presuppose that the sampled_candidates are unique.

We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples (index, id, weight), where index represents the row number in true_classes, id represents the position in sampled_candidates, and weight is -FLOAT_MAX.

The result of this op should be passed through a sparse_to_dense operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.

Args:

  • true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
  • sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled_candidates output of CandidateSampler.
  • num_true: An int. The number of target classes per training example.
  • seed: An int. An operation-specific seed. Default is 0.
  • name: A name for the operation (optional).

Returns:

  • indices: A Tensor of type int32 and shape [num_accidental_hits]. Values indicate rows in true_classes.
  • ids: A Tensor of type int64 and shape [num_accidental_hits]. Values indicate positions in sampled_candidates.
  • weights: A Tensor of type float and shape [num_accidental_hits]. Each value is -FLOAT_MAX.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def concat(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.concat, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.concat

Return

Applicative

Original documentation for Builder.concat

def concat(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.concat to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.concat

def concat(concat_dim, values, name="concat")

Concatenates tensors along one dimension.

Concatenates the list of tensors values along dimension concat_dim. If values[i].shape = [D0, D1, ... Dconcat_dim(i), ...Dn], the concatenated result has shape

[D0, D1, ... Rconcat_dim, ...Dn]

where

Rconcat_dim = sum(Dconcat_dim(i))

That is, the data from the input tensors is joined along the concat_dim dimension.

The number of dimensions of the input tensors must match, and all dimensions except concat_dim must be equal.

For example:

```python
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]

# tensor t3 with shape [2, 3]
# tensor t4 with shape [2, 3]
tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
```

Note: If you are concatenating along a new axis consider using pack. E.g.

```python
tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors])
```

can be rewritten as

```python
tf.pack(tensors, axis=axis)
```

Args:

  • concat_dim: 0-D int32 Tensor. Dimension along which to concatenate.
  • values: A list of Tensor objects or a single Tensor.
  • name: A name for the operation (optional).

Returns: A Tensor resulting from concatenation of the input tensors.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def concat_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.concat_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.concat_layer

Return

Applicative

Original documentation for Builder.concat_layer

def concat_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.concat, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.concat

def concat(concat_dim, values, name="concat"):

Concatenates tensors along one dimension.

Concatenates the list of tensors values along dimension concat_dim. If values[i].shape = [D0, D1, ... Dconcat_dim(i), ...Dn], the concatenated result has shape

[D0, D1, ... Rconcat_dim, ...Dn]

where

Rconcat_dim = sum(Dconcat_dim(i))

That is, the data from the input tensors is joined along the concat_dim dimension.

The number of dimensions of the input tensors must match, and all dimensions except concat_dim must be equal.

For example:

```python
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]

# tensor t3 with shape [2, 3]
# tensor t4 with shape [2, 3]
tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
```

Note: If you are concatenating along a new axis consider using pack. E.g.

```python
tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors])
```

can be rewritten as

```python
tf.pack(tensors, axis=axis)
```

Args:

  • concat_dim: 0-D int32 Tensor. Dimension along which to concatenate.
  • values: A list of Tensor objects or a single Tensor.
  • name: A name for the operation (optional).

Returns: A Tensor resulting from concatenation of the input tensors.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cond(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cond, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cond

Return

Applicative

Original documentation for Builder.cond

def cond(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cond to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cond

def cond(pred, fn1, fn2, name=None)

Return either fn1() or fn2() based on the boolean predicate pred.

fn1 and fn2 both return lists of output tensors. fn1 and fn2 must have the same non-zero number and type of outputs.

Note that the conditional execution applies only to the operations defined in fn1 and fn2. Consider the following simple program:

```python
z = tf.mul(a, b)
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
```

If x < y, the tf.add operation will be executed and tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.mul operation is always executed, unconditionally. Although this behavior is consistent with the dataflow model of TensorFlow, it has occasionally surprised some users who expected a lazier semantics.

Args:

  • pred: A scalar determining whether to return the result of fn1 or fn2.
  • fn1: The callable to be performed if pred is true.
  • fn2: The callable to be performed if pred is false.
  • name: Optional name prefix for the returned tensors.

Returns: Tensors returned by the call to either fn1 or fn2. If the callables return a singleton list, the element is extracted from the list.

Raises:

  • TypeError: if fn1 or fn2 is not callable.
  • ValueError: if fn1 and fn2 do not return the same number of tensors, or return tensors of different types.

Example:

```python
x = tf.constant(2)
y = tf.constant(5)
def f1(): return tf.mul(x, 17)
def f2(): return tf.add(y, 23)
r = cond(tf.less(x, y), f1, f2)
# r is set to f1().
# Operations in f2 (e.g., tf.add) are not executed.
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cond_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cond_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cond_layer

Return

Applicative

Original documentation for Builder.cond_layer

def cond_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cond, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cond

def cond(pred, fn1, fn2, name=None):

Return either fn1() or fn2() based on the boolean predicate pred.

fn1 and fn2 both return lists of output tensors. fn1 and fn2 must have the same non-zero number and type of outputs.

Note that the conditional execution applies only to the operations defined in fn1 and fn2. Consider the following simple program:

```python
z = tf.mul(a, b)
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
```

If x < y, the tf.add operation will be executed and tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.mul operation is always executed, unconditionally. Although this behavior is consistent with the dataflow model of TensorFlow, it has occasionally surprised some users who expected a lazier semantics.

Args:

  • pred: A scalar determining whether to return the result of fn1 or fn2.
  • fn1: The callable to be performed if pred is true.
  • fn2: The callable to be performed if pred is false.
  • name: Optional name prefix for the returned tensors.

Returns: Tensors returned by the call to either fn1 or fn2. If the callables return a singleton list, the element is extracted from the list.

Raises:

  • TypeError: if fn1 or fn2 is not callable.
  • ValueError: if fn1 and fn2 do not return the same number of tensors, or return tensors of different types.

Example:

```python
x = tf.constant(2)
y = tf.constant(5)
def f1(): return tf.mul(x, 17)
def f2(): return tf.add(y, 23)
r = cond(tf.less(x, y), f1, f2)
# r is set to f1().
# Operations in f2 (e.g., tf.add) are not executed.
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conj(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conj, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conj

Return

Applicative

Original documentation for Builder.conj

def conj(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.conj to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.conj

def conj(x, name=None)

Returns the complex conjugate of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form \(a + bj\), where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form \(a - bj\).

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]

If x is real, it is returned unchanged.

Args:

  • x: Tensor to conjugate. Must have numeric type.
  • name: A name for the operation (optional).

Returns: A Tensor that is the conjugate of x (with the same type).

Raises: TypeError: If x is not a numeric tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conj_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conj_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conj_layer

Return

Applicative

Original documentation for Builder.conj_layer

def conj_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.conj, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.conj

def conj(x, name=None):

Returns the complex conjugate of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form \(a + bj\), where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form \(a - bj\).

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]

If x is real, it is returned unchanged.

Args:

  • x: Tensor to conjugate. Must have numeric type.
  • name: A name for the operation (optional).

Returns: A Tensor that is the conjugate of x (with the same type).

Raises: TypeError: If x is not a numeric tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def constant(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.constant, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.constant

Return

Applicative

Original documentation for Builder.constant

def constant(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.constant to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.constant

def constant(value, dtype=None, shape=None, name="Const")

Creates a constant tensor.

The resulting tensor is populated with values of type dtype, as specified by arguments value and (optionally) shape (see examples below).

The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the shape argument (if specified). In the case where the list length is less than the number of elements specified by shape, the last element in the list will be used to fill the remaining entries.

The argument shape is optional. If present, it specifies the dimensions of the resulting tensor. If not present, the shape of value is used.

If the argument dtype is not specified, then the type is inferred from the type of value.

For example:

```python
# Constant 1-D Tensor populated with value list.
tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]

# Constant 2-D tensor populated with scalar value -1.
tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
                                             [-1. -1. -1.]]
```

Args:

  • value: A constant value (or list) of output type dtype.
  • dtype: The type of the elements of the resulting tensor.
  • shape: Optional dimensions of resulting tensor.
  • name: Optional name for the tensor.

Returns: A Constant Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def constant_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.constant_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.constant_initializer

Return

Applicative

Original documentation for Builder.constant_initializer

def constant_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.constant_initializer to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.constant_initializer

def constant_initializer(value=0, dtype=tf.float32)

Returns an initializer that generates tensors with constant values.

The resulting tensor is populated with values of type dtype, as specified by arguments value following the desired shape of the new tensor (see examples below).

The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the desired shape of the tensor. In the case where the total number of elements in value is less than the number of elements required by the tensor shape, the last element in value will be used to fill the remaining entries. If the total number of elements in value is greater than the number of elements required by the tensor shape, the initializer will raise a ValueError.

Args:

  • value: A Python scalar, list of values, or an N-dimensional numpy array. All elements of the initialized variable will be set to the corresponding value in the value argument.
  • dtype: The data type.

Returns: An initializer that generates tensors with constant values.

Examples: The following example can be rewritten using a numpy.ndarray instead of the value list, even reshaped, as shown in the two commented lines below the value list initialization.

```python
import numpy as np
import tensorflow as tf

value = [0, 1, 2, 3, 4, 5, 6, 7]
# value = np.array(value)
# value = value.reshape([2, 4])
init = tf.constant_initializer(value)

print('fitting shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[2, 4], initializer=init)
    x.initializer.run()
    print(x.eval())

# fitting shape:
# [[ 0.  1.  2.  3.]
#  [ 4.  5.  6.  7.]]

print('larger shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[3, 4], initializer=init)
    x.initializer.run()
    print(x.eval())

# larger shape:
# [[ 0.  1.  2.  3.]
#  [ 4.  5.  6.  7.]
#  [ 7.  7.  7.  7.]]

print('smaller shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[2, 3], initializer=init)

# ValueError: Too many elements provided. Needed at most 6, but received 8
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def constant_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.constant_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.constant_initializer_layer

Return

Applicative

Original documentation for Builder.constant_initializer_layer

def constant_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.constant_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.constant_initializer

def constant_initializer(value=0, dtype=tf.float32):

Returns an initializer that generates tensors with constant values.

The resulting tensor is populated with values of type dtype, as specified by arguments value following the desired shape of the new tensor (see examples below).

The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the desired shape of the tensor. In the case where the total number of elements in value is less than the number of elements required by the tensor shape, the last element in value will be used to fill the remaining entries. If the total number of elements in value is greater than the number of elements required by the tensor shape, the initializer will raise a ValueError.

Args:

  • value: A Python scalar, list of values, or an N-dimensional numpy array. All elements of the initialized variable will be set to the corresponding value in the value argument.
  • dtype: The data type.

Returns: An initializer that generates tensors with constant values.

Examples: The following example can be rewritten using a numpy.ndarray instead of the value list, even reshaped, as shown in the two commented lines below the value list initialization.

```python
import numpy as np
import tensorflow as tf

value = [0, 1, 2, 3, 4, 5, 6, 7]
# value = np.array(value)
# value = value.reshape([2, 4])
init = tf.constant_initializer(value)

print('fitting shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[2, 4], initializer=init)
    x.initializer.run()
    print(x.eval())

# fitting shape:
# [[ 0.  1.  2.  3.]
#  [ 4.  5.  6.  7.]]

print('larger shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[3, 4], initializer=init)
    x.initializer.run()
    print(x.eval())

# larger shape:
# [[ 0.  1.  2.  3.]
#  [ 4.  5.  6.  7.]
#  [ 7.  7.  7.  7.]]

print('smaller shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[2, 3], initializer=init)

# ValueError: Too many elements provided. Needed at most 6, but received 8
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def constant_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.constant_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.constant_layer

Return

Applicative

Original documentation for Builder.constant_layer

def constant_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.constant, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.constant

def constant(value, dtype=None, shape=None, name="Const"):

Creates a constant tensor.

The resulting tensor is populated with values of type dtype, as specified by arguments value and (optionally) shape (see examples below).

The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the shape argument (if specified). In the case where the list length is less than the number of elements specified by shape, the last element in the list will be used to fill the remaining entries.

The argument shape is optional. If present, it specifies the dimensions of the resulting tensor. If not present, the shape of value is used.

If the argument dtype is not specified, then the type is inferred from the type of value.

For example:

```python
# Constant 1-D Tensor populated with value list.
tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]

# Constant 2-D tensor populated with scalar value -1.
tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
                                             [-1. -1. -1.]]
```

Args:

  • value: A constant value (or list) of output type dtype.
  • dtype: The type of the elements of the resulting tensor.
  • shape: Optional dimensions of resulting tensor.
  • name: Optional name for the tensor.

Returns: A Constant Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def container(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.container, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.container

Return

Applicative

Original documentation for Builder.container

def container(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.container to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.container

def container(container_name)

Wrapper for Graph.container() using the default graph.

Args: container_name: The container string to use in the context.

Returns: A context manager that specifies the default container to use for newly created stateful ops.
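As a brief usage sketch (added here, not part of the original documentation; the container name is arbitrary), stateful ops created inside the context are placed in the named container:

```python
import tensorflow as tf

# Variables (stateful ops) created inside the block are assigned to the
# container named "experiment0".
with tf.container("experiment0"):
    v = tf.Variable(tf.zeros([10]), name="v")
```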

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def container_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.container_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.container_layer

Return

Applicative

Original documentation for Builder.container_layer

def container_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.container, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.container

def container(container_name):

Wrapper for Graph.container() using the default graph.

Args: container_name: The container string to use in the context.

Returns: A context manager that specifies the default container to use for newly created stateful ops.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def control_dependencies(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.control_dependencies, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.control_dependencies

Return

Applicative

Original documentation for Builder.control_dependencies

def control_dependencies(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.control_dependencies to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.control_dependencies

def control_dependencies(control_inputs)

Wrapper for Graph.control_dependencies() using the default graph.

See Graph.control_dependencies() for more details.

Args: control_inputs: A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.

Returns: A context manager that specifies control dependencies for all operations constructed within the context.
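A common usage sketch (added for clarity, not part of the original documentation) forces an update to run before a read:

```python
import tensorflow as tf

x = tf.Variable(0.0)
increment = tf.assign_add(x, 1.0)

# `read` is only evaluated after `increment` has executed.
with tf.control_dependencies([increment]):
    read = tf.identity(x)
```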

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def control_dependencies_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.control_dependencies_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.control_dependencies_layer

Return

Applicative

Original documentation for Builder.control_dependencies_layer

def control_dependencies_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.control_dependencies, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.control_dependencies

def control_dependencies(control_inputs):

Wrapper for Graph.control_dependencies() using the default graph.

See Graph.control_dependencies() for more details.

Args: control_inputs: A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.

Returns: A context manager that specifies control dependencies for all operations constructed within the context.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv1d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv1d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv1d

Return

Applicative

Original documentation for Builder.conv1d

def conv1d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv1d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv1d

def conv1d(value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Computes a 1-D convolution given 3-D input and filter tensors.

Given an input tensor of shape [batch, in_width, in_channels] and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. A tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding as in conv2d) and returned to the caller.

Args:

  • value: A 3D Tensor. Must be of type float32 or float64.
  • filters: A 3D Tensor. Must have the same type as input.
  • stride: An integer. The number of entries by which the filter is moved right at each step.
  • padding: 'SAME' or 'VALID'
  • use_cudnn_on_gpu: An optional bool. Defaults to True.
  • data_format: An optional string from "NHWC", "NCHW". Defaults to "NHWC", the data is stored in the order of [batch, in_width, in_channels]. The "NCHW" format stores data as [batch, in_channels, in_width].
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
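
As a minimal sketch of the underlying tf.nn.conv1d call, with shapes matching the Args above (tensor names are illustrative):

    value = tf.random_normal([8, 100, 3])    # [batch, in_width, in_channels]
    filters = tf.random_normal([5, 3, 16])   # [filter_width, in_channels, out_channels]
    out = tf.nn.conv1d(value, filters, stride=2, padding="SAME")
    # out has shape [8, 50, 16]: out_width = ceil(100 / 2) under "SAME" padding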

def conv1d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv1d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv1d_layer

Return

Applicative

Original documentation for Builder.conv1d_layer

def conv1d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv1d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv1d

def conv1d(value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None):

Computes a 1-D convolution given 3-D input and filter tensors.

Given an input tensor of shape [batch, in_width, in_channels] and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. A tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding as in conv2d) and returned to the caller.

Args: value: A 3D Tensor. Must be of type float32 or float64. filters: A 3D Tensor. Must have the same type as input. stride: An integer. The number of entries by which the filter is moved right at each step. padding: 'SAME' or 'VALID' use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from "NHWC", "NCHW". Defaults to "NHWC", the data is stored in the order of [batch, in_width, in_channels]. The "NCHW" format stores data as [batch, in_channels, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d

Return

Applicative

Original documentation for Builder.conv2d

def conv2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv2d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv2d

def conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Computes a 2-D convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following:

  1. Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
  2. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
  3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] =
    sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
                    filter[di, dj, q, k]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args: input: A Tensor. Must be one of the following types: half, float32, float64. filter: A Tensor. Must have the same type as input. strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
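
As a minimal sketch of the underlying tf.nn.conv2d call in the default NHWC format (tensor names are illustrative):

    x = tf.random_normal([8, 28, 28, 1])     # [batch, in_height, in_width, in_channels]
    w = tf.random_normal([5, 5, 1, 32])      # [filter_height, filter_width, in_channels, out_channels]
    y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
    # y has shape [8, 28, 28, 32]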

def conv2d_backprop_filter(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_backprop_filter, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_backprop_filter

Return

Applicative

Original documentation for Builder.conv2d_backprop_filter

def conv2d_backprop_filter(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv2d_backprop_filter to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv2d_backprop_filter

def conv2d_backprop_filter(input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Computes the gradients of convolution with respect to the filter.

Args: input: A Tensor. Must be one of the following types: half, float32, float64. 4-D with shape [batch, in_height, in_width, in_channels]. filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, out_channels] tensor. out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
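
As a minimal sketch of the underlying tf.nn.conv2d_backprop_filter call, reusing the shapes from the conv2d example above (all names illustrative):

    x = tf.random_normal([8, 28, 28, 1])      # forward input
    dy = tf.random_normal([8, 28, 28, 32])    # gradient w.r.t. the conv output
    dw = tf.nn.conv2d_backprop_filter(x, filter_sizes=[5, 5, 1, 32],
                                      out_backprop=dy,
                                      strides=[1, 1, 1, 1], padding="SAME")
    # dw has shape [5, 5, 1, 32]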

def conv2d_backprop_filter_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_backprop_filter_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_backprop_filter_layer

Return

Applicative

Original documentation for Builder.conv2d_backprop_filter_layer

def conv2d_backprop_filter_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv2d_backprop_filter, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv2d_backprop_filter

def conv2d_backprop_filter(input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None):

Computes the gradients of convolution with respect to the filter.

Args: input: A Tensor. Must be one of the following types: half, float32, float64. 4-D with shape [batch, in_height, in_width, in_channels]. filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, out_channels] tensor. out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv2d_backprop_input(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_backprop_input, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_backprop_input

Return

Applicative

Original documentation for Builder.conv2d_backprop_input

def conv2d_backprop_input(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv2d_backprop_input to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv2d_backprop_input

def conv2d_backprop_input(input_sizes, filter, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Computes the gradients of convolution with respect to the input.

Args: input_sizes: A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor. filter: A Tensor. Must be one of the following types: half, float32, float64. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. out_backprop: A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as filter. 4-D with shape [batch, in_height, in_width, in_channels]. Gradient w.r.t. the input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
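
As a minimal sketch of the underlying tf.nn.conv2d_backprop_input call, the counterpart of the filter-gradient example above (all names illustrative):

    w = tf.random_normal([5, 5, 1, 32])       # forward filter
    dy = tf.random_normal([8, 28, 28, 32])    # gradient w.r.t. the conv output
    dx = tf.nn.conv2d_backprop_input(input_sizes=[8, 28, 28, 1], filter=w,
                                     out_backprop=dy,
                                     strides=[1, 1, 1, 1], padding="SAME")
    # dx has shape [8, 28, 28, 1]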

def conv2d_backprop_input_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_backprop_input_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_backprop_input_layer

Return

Applicative

Original documentation for Builder.conv2d_backprop_input_layer

def conv2d_backprop_input_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv2d_backprop_input, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv2d_backprop_input

def conv2d_backprop_input(input_sizes, filter, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None):

Computes the gradients of convolution with respect to the input.

Args: input_sizes: A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor. filter: A Tensor. Must be one of the following types: half, float32, float64. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. out_backprop: A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as filter. 4-D with shape [batch, in_height, in_width, in_channels]. Gradient w.r.t. the input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_layer

Return

Applicative

Original documentation for Builder.conv2d_layer

def conv2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv2d

def conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None):

Computes a 2-D convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following:

  1. Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
  2. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
  3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] =
    sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
                    filter[di, dj, q, k]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args: input: A Tensor. Must be one of the following types: half, float32, float64. filter: A Tensor. Must have the same type as input. strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. use_cudnn_on_gpu: An optional bool. Defaults to True. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv2d_transpose(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_transpose, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_transpose

Return

Applicative

Original documentation for Builder.conv2d_transpose

def conv2d_transpose(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv2d_transpose to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv2d_transpose

def conv2d_transpose(value, filter, output_shape, strides, padding="SAME", name=None)

The transpose of conv2d.

This operation is sometimes called "deconvolution" after Deconvolutional Networks, but is actually the transpose (gradient) of conv2d rather than an actual deconvolution.

Args: value: A 4-D Tensor of type float and shape [batch, height, width, in_channels]. filter: A 4-D Tensor with the same type as value and shape [height, width, output_channels, in_channels]. filter's in_channels dimension must match that of value. output_shape: A 1-D Tensor representing the output shape of the deconvolution op. strides: A list of ints. The stride of the sliding window for each dimension of the input tensor. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here name: Optional name for the returned tensor.

Returns: A Tensor with the same type as value.

Raises: ValueError: If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
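
As a minimal sketch of the underlying tf.nn.conv2d_transpose call for 2x spatial upsampling; note the filter layout is [height, width, output_channels, in_channels] (all names illustrative):

    y = tf.random_normal([8, 14, 14, 32])
    w = tf.random_normal([5, 5, 16, 32])
    x = tf.nn.conv2d_transpose(y, w, output_shape=[8, 28, 28, 16],
                               strides=[1, 2, 2, 1], padding="SAME")
    # x has shape [8, 28, 28, 16]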

def conv2d_transpose_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv2d_transpose_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv2d_transpose_layer

Return

Applicative

Original documentation for Builder.conv2d_transpose_layer

def conv2d_transpose_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv2d_transpose, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv2d_transpose

def conv2d_transpose(value, filter, output_shape, strides, padding="SAME", name=None):

The transpose of conv2d.

This operation is sometimes called "deconvolution" after Deconvolutional Networks, but is actually the transpose (gradient) of conv2d rather than an actual deconvolution.

Args: value: A 4-D Tensor of type float and shape [batch, height, width, in_channels]. filter: A 4-D Tensor with the same type as value and shape [height, width, output_channels, in_channels]. filter's in_channels dimension must match that of value. output_shape: A 1-D Tensor representing the output shape of the deconvolution op. strides: A list of ints. The stride of the sliding window for each dimension of the input tensor. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here name: Optional name for the returned tensor.

Returns: A Tensor with the same type as value.

Raises: ValueError: If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d

Return

Applicative

Original documentation for Builder.conv3d

def conv3d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv3d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv3d

def conv3d(input, filter, strides, padding, name=None)

Computes a 3-D convolution given 5-D input and filter tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, in_depth, in_height, in_width, in_channels]. filter: A Tensor. Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels]. in_channels must match between input and filter. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
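
As a minimal sketch of the underlying tf.nn.conv3d call on a 5-D input (all names illustrative):

    x = tf.random_normal([2, 8, 28, 28, 1])   # [batch, in_depth, in_height, in_width, in_channels]
    w = tf.random_normal([3, 3, 3, 1, 16])    # [filter_depth, filter_height, filter_width, in_channels, out_channels]
    y = tf.nn.conv3d(x, w, strides=[1, 1, 1, 1, 1], padding="SAME")
    # y has shape [2, 8, 28, 28, 16]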

def conv3d_backprop_filter(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_filter, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_filter

Return

Applicative

Original documentation for Builder.conv3d_backprop_filter

def conv3d_backprop_filter(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv3d_backprop_filter to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv3d_backprop_filter

def conv3d_backprop_filter(input, filter, out_backprop, strides, padding, name=None)

Computes the gradients of 3-D convolution with respect to the filter.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, in_channels]. filter: A Tensor. Must have the same type as input. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter. out_backprop: A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_filter_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_filter_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_filter_layer

Return

Applicative

Original documentation for Builder.conv3d_backprop_filter_layer

def conv3d_backprop_filter_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv3d_backprop_filter, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv3d_backprop_filter

def conv3d_backprop_filter(input, filter, out_backprop, strides, padding, name=None):

Computes the gradients of 3-D convolution with respect to the filter.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, in_channels]. filter: A Tensor. Must have the same type as input. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter. out_backprop: A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_filter_v2(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_filter_v2, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_filter_v2

Return

Applicative

Original documentation for Builder.conv3d_backprop_filter_v2

def conv3d_backprop_filter_v2(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv3d_backprop_filter_v2 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv3d_backprop_filter_v2

def conv3d_backprop_filter_v2(input, filter_sizes, out_backprop, strides, padding, name=None)

Computes the gradients of 3-D convolution with respect to the filter.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, in_channels]. filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 5-D [filter_depth, filter_height, filter_width, in_channels, out_channels] tensor. out_backprop: A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_filter_v2_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_filter_v2_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_filter_v2_layer

Return

Applicative

Original documentation for Builder.conv3d_backprop_filter_v2_layer

def conv3d_backprop_filter_v2_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv3d_backprop_filter_v2, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv3d_backprop_filter_v2

def conv3d_backprop_filter_v2(input, filter_sizes, out_backprop, strides, padding, name=None):

Computes the gradients of 3-D convolution with respect to the filter.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, in_channels]. filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 5-D [filter_depth, filter_height, filter_width, in_channels, out_channels] tensor. out_backprop: A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_input(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_input, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_input

Return

Applicative

Original documentation for Builder.conv3d_backprop_input

def conv3d_backprop_input(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv3d_backprop_input to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv3d_backprop_input

def conv3d_backprop_input(input, filter, out_backprop, strides, padding, name=None)

Computes the gradients of 3-D convolution with respect to the input.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, in_channels]. filter: A Tensor. Must have the same type as input. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter. out_backprop: A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_input_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_input_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_input_layer

Return

Applicative

Original documentation for Builder.conv3d_backprop_input_layer

def conv3d_backprop_input_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv3d_backprop_input, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv3d_backprop_input

def conv3d_backprop_input(input, filter, out_backprop, strides, padding, name=None):

Computes the gradients of 3-D convolution with respect to the input.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, in_channels]. filter: A Tensor. Must have the same type as input. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter. out_backprop: A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_input_v2(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_input_v2, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_input_v2

Return

Applicative

Original documentation for Builder.conv3d_backprop_input_v2

def conv3d_backprop_input_v2(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv3d_backprop_input_v2 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv3d_backprop_input_v2

def conv3d_backprop_input_v2(input_sizes, filter, out_backprop, strides, padding, name=None)

Computes the gradients of 3-D convolution with respect to the input.

Args: input_sizes: A Tensor of type int32. An integer vector representing the tensor shape of input, where input is a 5-D [batch, depth, rows, cols, in_channels] tensor. filter: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter. out_backprop: A Tensor. Must have the same type as filter. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as filter.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_backprop_input_v2_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_backprop_input_v2_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_backprop_input_v2_layer

Return

Applicative

Original documentation for Builder.conv3d_backprop_input_v2_layer

def conv3d_backprop_input_v2_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv3d_backprop_input_v2, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv3d_backprop_input_v2

def conv3d_backprop_input_v2(input_sizes, filter, out_backprop, strides, padding, name=None):

Computes the gradients of 3-D convolution with respect to the input.

Args: input_sizes: A Tensor of type int32. An integer vector representing the tensor shape of input, where input is a 5-D [batch, depth, rows, cols, in_channels] tensor. filter: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter. out_backprop: A Tensor. Must have the same type as filter. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as filter.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_layer

Return

Applicative

Original documentation for Builder.conv3d_layer

def conv3d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv3d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv3d

def conv3d(input, filter, strides, padding, name=None):

Computes a 3-D convolution given 5-D input and filter tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, in_depth, in_height, in_width, in_channels]. filter: A Tensor. Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels]. in_channels must match between input and filter. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def conv3d_transpose(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_transpose, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_transpose

Return

Applicative

Original documentation for Builder.conv3d_transpose

def conv3d_transpose(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.conv3d_transpose to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.conv3d_transpose

def conv3d_transpose(value, filter, output_shape, strides, padding="SAME", name=None)

The transpose of conv3d.

This operation is sometimes called "deconvolution" after Deconvolutional Networks, but is actually the transpose (gradient) of conv3d rather than an actual deconvolution.

Args: value: A 5-D Tensor of type float and shape [batch, depth, height, width, in_channels]. filter: A 5-D Tensor with the same type as value and shape [depth, height, width, output_channels, in_channels]. filter's in_channels dimension must match that of value. output_shape: A 1-D Tensor representing the output shape of the deconvolution op. strides: A list of ints. The stride of the sliding window for each dimension of the input tensor. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here name: Optional name for the returned tensor.

Returns: A Tensor with the same type as value.

Raises: ValueError: If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
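
As a minimal sketch of the underlying tf.nn.conv3d_transpose call for 2x upsampling along depth, height and width; the filter layout is [depth, height, width, output_channels, in_channels] (all names illustrative):

    y = tf.random_normal([2, 4, 14, 14, 32])
    w = tf.random_normal([3, 3, 3, 16, 32])
    x = tf.nn.conv3d_transpose(y, w, output_shape=[2, 8, 28, 28, 16],
                               strides=[1, 2, 2, 2, 1], padding="SAME")
    # x has shape [2, 8, 28, 28, 16]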

def conv3d_transpose_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.conv3d_transpose_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.conv3d_transpose_layer

Return

Applicative

Original documentation for Builder.conv3d_transpose_layer

def conv3d_transpose_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.conv3d_transpose, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.conv3d_transpose

def conv3d_transpose(value, filter, output_shape, strides, padding="SAME", name=None):

The transpose of conv3d.

This operation is sometimes called "deconvolution" after Deconvolutional Networks, but is actually the transpose (gradient) of conv3d rather than an actual deconvolution.

Args: value: A 5-D Tensor of type float and shape [batch, depth, height, width, in_channels]. filter: A 5-D Tensor with the same type as value and shape [depth, height, width, output_channels, in_channels]. filter's in_channels dimension must match that of value. output_shape: A 1-D Tensor representing the output shape of the deconvolution op. strides: A list of ints. The stride of the sliding window for each dimension of the input tensor. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here name: Optional name for the returned tensor.

Returns: A Tensor with the same type as value.

Raises: ValueError: If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def convert_to_tensor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.convert_to_tensor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.convert_to_tensor

Return

Applicative

Original documentation for Builder.convert_to_tensor

def convert_to_tensor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.convert_to_tensor to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.convert_to_tensor

def convert_to_tensor(value, dtype=None, name=None, as_ref=False, preferred_dtype=None)

Converts the given value to a Tensor.

This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example:

    import numpy as np

    def my_func(arg):
        arg = tf.convert_to_tensor(arg, dtype=tf.float32)
        return tf.matmul(arg, arg) + arg

    # The following calls are equivalent.
    value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
    value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
    value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))

This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects.

Args: value: An object whose type has a registered Tensor conversion function. dtype: Optional element type for the returned tensor. If missing, the type is inferred from the type of value. name: Optional name to use if a new Tensor is created. as_ref: True if we want the result as a ref tensor. Only used if a new Tensor is created. preferred_dtype: Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so preferred_dtype can be used as a soft preference. If the conversion to preferred_dtype is not possible, this argument has no effect.

Returns: A Tensor based on value.

Raises: TypeError: If no conversion function is registered for value. RuntimeError: If a registered conversion function returns an invalid value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def convert_to_tensor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.convert_to_tensor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.convert_to_tensor_layer

Return

Applicative

Original documentation for Builder.convert_to_tensor_layer

def convert_to_tensor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.convert_to_tensor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.convert_to_tensor

def convert_to_tensor(value, dtype=None, name=None, as_ref=False, preferred_dtype=None):

Converts the given value to a Tensor.

This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example:

    import numpy as np

    def my_func(arg):
        arg = tf.convert_to_tensor(arg, dtype=tf.float32)
        return tf.matmul(arg, arg) + arg

    # The following calls are equivalent.
    value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
    value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
    value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))

This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects.

Args: value: An object whose type has a registered Tensor conversion function. dtype: Optional element type for the returned tensor. If missing, the type is inferred from the type of value. name: Optional name to use if a new Tensor is created. as_ref: True if we want the result as a ref tensor. Only used if a new Tensor is created. preferred_dtype: Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so preferred_dtype can be used as a soft preference. If the conversion to preferred_dtype is not possible, this argument has no effect.

Returns: A Tensor based on value.

Raises: TypeError: If no conversion function is registered for value. RuntimeError: If a registered conversion function returns an invalid value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def convert_to_tensor_or_indexed_slices(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.convert_to_tensor_or_indexed_slices, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.convert_to_tensor_or_indexed_slices

Return

Applicative

Original documentation for Builder.convert_to_tensor_or_indexed_slices

def convert_to_tensor_or_indexed_slices(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.convert_to_tensor_or_indexed_slices to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.convert_to_tensor_or_indexed_slices

def convert_to_tensor_or_indexed_slices(value, dtype=None, name=None, as_ref=False)

Converts the given object to a Tensor or an IndexedSlices.

If value is an IndexedSlices or SparseTensor it is returned unmodified. Otherwise, it is converted to a Tensor using convert_to_tensor().

Args: value: An IndexedSlices, SparseTensor, or an object that can be consumed by convert_to_tensor(). dtype: (Optional.) The required DType of the returned Tensor or IndexedSlices. name: (Optional.) A name to use if a new Tensor is created. as_ref: True if the caller wants the results as ref tensors.

Returns: A Tensor, IndexedSlices, or SparseTensor based on value.

Raises: ValueError: If dtype does not match the element type of value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def convert_to_tensor_or_indexed_slices_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.convert_to_tensor_or_indexed_slices_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.convert_to_tensor_or_indexed_slices_layer

Return

Applicative

Original documentation for Builder.convert_to_tensor_or_indexed_slices_layer

def convert_to_tensor_or_indexed_slices_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.convert_to_tensor_or_indexed_slices, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.convert_to_tensor_or_indexed_slices

def convert_to_tensor_or_indexed_slices(value, dtype=None, name=None, as_ref=False):

Converts the given object to a Tensor or an IndexedSlices.

If value is an IndexedSlices or SparseTensor it is returned unmodified. Otherwise, it is converted to a Tensor using convert_to_tensor().

Args: value: An IndexedSlices, SparseTensor, or an object that can be consumed by convert_to_tensor(). dtype: (Optional.) The required DType of the returned Tensor or IndexedSlices. name: (Optional.) A name to use if a new Tensor is created. as_ref: True if the caller wants the results as ref tensors.

Returns: A Tensor, IndexedSlices, or SparseTensor based on value.

Raises: ValueError: If dtype does not match the element type of value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def convolution2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.convolution2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.convolution2d

Return

Applicative

Original documentation for Builder.convolution2d

def convolution2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.contrib.layers.convolution2d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.contrib.layers.convolution2d

def convolution2d()

Adds a 2D convolution followed by an optional batch_norm layer.

convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.

Performs a'trous convolution with input stride equal to rate if rate is greater than one.

Args: inputs: a 4-D tensor [batch_size, height, width, channels]. num_outputs: integer, the number of output filters. kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same. stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value. padding: one of VALID or SAME. rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then a'trous convolution is applied and stride must be set to 1. activation_fn: activation function, set to None to skip it and maintain a linear activation. normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function. normalizer_params: normalization function parameters. weights_initializer: An initializer for the weights. weights_regularizer: Optional regularizer for the weights. biases_initializer: An initializer for the biases. If None skip biases. biases_regularizer: Optional regularizer for the biases. reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given. variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable. outputs_collections: collection to add the outputs. trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). scope: Optional scope for variable_scope.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both 'rate' and stride are larger than one.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
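
As a minimal sketch of the underlying tf.contrib.layers.convolution2d call (contrib-era API; names are illustrative):

    x = tf.random_normal([8, 28, 28, 3])
    h = tf.contrib.layers.convolution2d(x, num_outputs=32, kernel_size=3,
                                        stride=1, padding='SAME',
                                        activation_fn=tf.nn.relu)
    # h has shape [8, 28, 28, 32]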

def copy(

self)

Returns a copy of the applicative

def copy(self):
    """Returns a compy of the applicative"""
    return self._unit(self.f)

def cos(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cos, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cos

Return

Applicative

Original documentation for Builder.cos

def cos(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cos to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cos

def cos(x, name=None)

Computes cos of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
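
The generated body shown above closes over f, the Builder method being lifted. As a stripped-down sketch of that pattern (the lift name is illustrative, not part of the library), the whole mechanism amounts to:

    def lift(f):
        # Hypothetical re-creation of the generator above, for illustration.
        def _method(app, *args, **kwargs):
            def _lambda(builder):
                g = getattr(builder, f.__name__)  # look up e.g. Builder.cos
                return g(*args, **kwargs)         # forward every argument
            return app.compose(_lambda)           # defer until a builder arrives
        return _method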

def cos_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cos_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cos_layer

Return

Applicative

Original documentation for Builder.cos_layer

def cos_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cos, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cos

def cos(x, name=None):

Computes cos of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def count_up_to(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.count_up_to, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.count_up_to

Return

Applicative

Original documentation for Builder.count_up_to

def count_up_to(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.count_up_to to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.count_up_to

def count_up_to(ref, limit, name=None)

Increments 'ref' until it reaches 'limit'.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the updated value.

Args: ref: A mutable Tensor. Must be one of the following types: int32, int64. Should be from a scalar Variable node. limit: An int. If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as ref. A copy of the input before increment. If nothing else modifies the input, the values produced will all be distinct.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
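
A minimal sketch of the semantics described above, assuming the TF 0.x-era session API: each run returns the value before the increment, and running past limit raises OutOfRange.

    import tensorflow as tf

    counter = tf.Variable(0, dtype=tf.int32)
    bump = tf.count_up_to(counter, limit=3)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(bump))  # 0 -- the value before the increment
        print(sess.run(bump))  # 1
        print(sess.run(bump))  # 2
        # A fourth run would raise an OutOfRange error.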

def count_up_to_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.count_up_to_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.count_up_to_layer

Return

Applicative

Original documentation for Builder.count_up_to_layer

def count_up_to_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.count_up_to, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.count_up_to

def count_up_to(ref, limit, name=None):

Increments 'ref' until it reaches 'limit'.

This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the updated value.

Args: ref: A mutable Tensor. Must be one of the following types: int32, int64. Should be from a scalar Variable node. limit: An int. If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as ref. A copy of the input before increment. If nothing else modifies the input, the values produced will all be distinct.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def create_partitioned_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.create_partitioned_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.create_partitioned_variables

Return

Applicative

Original documentation for Builder.create_partitioned_variables

def create_partitioned_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.create_partitioned_variables to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.create_partitioned_variables

def create_partitioned_variables(shape, slicing, initializer, dtype=tf.float32, trainable=True, collections=None, name=None, reuse=None)

Create a list of partitioned variables according to the given slicing.

Currently only one dimension of the full variable can be sliced, and the full variable can be reconstructed by the concatenation of the returned list along that dimension.

Args:

  • shape: list of integers. The shape of the full variable.
  • slicing: list of integers. How to partition the variable. Must be of the same length as shape. Each value indicates how many slices to create in the corresponding dimension. Presently only one of the values can be more than 1; that is, the variable can only be sliced along one dimension. For convenience, the requested number of partitions does not have to divide the corresponding dimension evenly. If it does not, the shapes of the partitions are incremented by 1 starting from partition 0 until all slack is absorbed. The adjustment rules may change in the future, but as you can save/restore these variables with different slicing specifications this should not be a problem.
  • initializer: a Tensor of shape shape, or a variable initializer function. If a function, it will be called once for each slice, passing the shape and data type of the slice as parameters. The function must return a tensor with the same shape as the slice.
  • dtype: type of the variables. Ignored if initializer is a Tensor.
  • trainable: if True, also add all the variables to the graph collection GraphKeys.TRAINABLE_VARIABLES.
  • collections: list of graph collection keys to add the variables to. Defaults to [GraphKeys.VARIABLES].
  • name: optional name for the full variable. Defaults to "PartitionedVariable" and gets uniquified automatically.
  • reuse: boolean or None. If True and name is set, reuse previously created variables; if False, create new variables; if None, inherit the parent scope's reuse.

Returns: A list of Variables corresponding to the slicing.

Raises: ValueError: If any of the arguments is malformed.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
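
A minimal sketch of the slicing rule above, assuming the TF 0.x-era API (including the old tf.concat(concat_dim, values) argument order): a [4, 10] variable sliced into two [4, 5] pieces along dimension 1.

    import tensorflow as tf

    # Two slices along dimension 1; only one entry of `slicing` may exceed 1.
    parts = tf.create_partitioned_variables(
        shape=[4, 10],
        slicing=[1, 2],
        initializer=tf.constant_initializer(0.0),  # called once per slice
        name="embedding")

    # Concatenating the slices along the sliced dimension reconstructs
    # the full [4, 10] variable.
    full = tf.concat(1, parts)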

def create_partitioned_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.create_partitioned_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.create_partitioned_variables_layer

Return

Applicative

Original documentation for Builder.create_partitioned_variables_layer

def create_partitioned_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.create_partitioned_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.create_partitioned_variables

def create_partitioned_variables(shape, slicing, initializer, dtype=tf.float32, trainable=True, collections=None, name=None, reuse=None):

Create a list of partitioned variables according to the given slicing.

Currently only one dimension of the full variable can be sliced, and the full variable can be reconstructed by the concatenation of the returned list along that dimension.

Args:

  • shape: list of integers. The shape of the full variable.
  • slicing: list of integers. How to partition the variable. Must be of the same length as shape. Each value indicates how many slices to create in the corresponding dimension. Presently only one of the values can be more than 1; that is, the variable can only be sliced along one dimension. For convenience, the requested number of partitions does not have to divide the corresponding dimension evenly. If it does not, the shapes of the partitions are incremented by 1 starting from partition 0 until all slack is absorbed. The adjustment rules may change in the future, but as you can save/restore these variables with different slicing specifications this should not be a problem.
  • initializer: a Tensor of shape shape, or a variable initializer function. If a function, it will be called once for each slice, passing the shape and data type of the slice as parameters. The function must return a tensor with the same shape as the slice.
  • dtype: type of the variables. Ignored if initializer is a Tensor.
  • trainable: if True, also add all the variables to the graph collection GraphKeys.TRAINABLE_VARIABLES.
  • collections: list of graph collection keys to add the variables to. Defaults to [GraphKeys.VARIABLES].
  • name: optional name for the full variable. Defaults to "PartitionedVariable" and gets uniquified automatically.
  • reuse: boolean or None. If True and name is set, reuse previously created variables; if False, create new variables; if None, inherit the parent scope's reuse.

Returns: A list of Variables corresponding to the slicing.

Raises: ValueError: If any of the arguments is malformed.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def crelu(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.crelu, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.crelu

Return

Applicative

Original documentation for Builder.crelu

def crelu(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.crelu to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.crelu

def crelu(features, name=None)

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: https://arxiv.org/abs/1603.05201

Args: features: A Tensor with type float, double, int32, int64, uint8, int16, or int8. name: A name for the operation (optional).

Returns: A Tensor with the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
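
A minimal numeric sketch of the depth-doubling behavior described above: crelu concatenates relu(x) with relu(-x) along the last dimension (TF 0.x-era session API assumed).

    import tensorflow as tf

    x = tf.constant([[1.0, -2.0]])  # shape [1, 2]
    y = tf.nn.crelu(x)              # shape [1, 4]: relu(x) followed by relu(-x)

    with tf.Session() as sess:
        print(sess.run(y))  # expected: [[1., 0., 0., 2.]]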

def crelu_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.crelu_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.crelu_layer

Return

Applicative

Original documentation for Builder.crelu_layer

def crelu_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.crelu, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.crelu

def crelu(features, name=None):

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: https://arxiv.org/abs/1603.05201

Args: features: A Tensor with type float, double, int32, int64, uint8, int16, or int8. name: A name for the operation (optional).

Returns: A Tensor with the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cross(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cross, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cross

Return

Applicative

Original documentation for Builder.cross

def cross(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cross to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cross

def cross(a, b, name=None)

Compute the pairwise cross product.

a and b must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.

Args: a: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. A tensor containing 3-element vectors. b: A Tensor. Must have the same type as a. Another tensor, of same type and shape as a. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a. Pairwise cross product of the vectors in a and b.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
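
A minimal sketch of the shape rule above: the innermost dimension must be 3, and each corresponding pair of 3-vectors is crossed independently.

    import tensorflow as tf

    a = tf.constant([[1.0, 0.0, 0.0]])
    b = tf.constant([[0.0, 1.0, 0.0]])
    c = tf.cross(a, b)  # expected: [[0., 0., 1.]], since x cross y = z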

def cross_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cross_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cross_layer

Return

Applicative

Original documentation for Builder.cross_layer

def cross_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cross, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cross

def cross(a, b, name=None):

Compute the pairwise cross product.

a and b must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.

Args: a: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. A tensor containing 3-element vectors. b: A Tensor. Must have the same type as a. Another tensor, of same type and shape as a. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a. Pairwise cross product of the vectors in a and b.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ctc_beam_search_decoder(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ctc_beam_search_decoder, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ctc_beam_search_decoder

Return

Applicative

Original documentation for Builder.ctc_beam_search_decoder

def ctc_beam_search_decoder(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.ctc_beam_search_decoder to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.ctc_beam_search_decoder

def ctc_beam_search_decoder(inputs, sequence_length, beam_width=100, top_paths=1, merge_repeated=True)

Performs beam search decoding on the logits given in input.

Note: The ctc_greedy_decoder is a special case of the ctc_beam_search_decoder with top_paths=1 (but that decoder is faster for this special case).

If merge_repeated is True, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the top path is A B B B B, the return value is:

  • A B if merge_repeated = True.
  • A B B B B if merge_repeated = False.

Args: inputs: 3-D float Tensor, size [max_time x batch_size x num_classes]. The logits. sequence_length: 1-D int32 vector containing sequence lengths, having size [batch_size]. beam_width: An int scalar >= 0 (beam search beam width). top_paths: An int scalar >= 0, <= beam_width (controls output size). merge_repeated: Boolean. Default: True.

Returns: A tuple (decoded, log_probabilities) where:

  • decoded: a list of length top_paths, where decoded[j] is a SparseTensor containing the decoded outputs. decoded[j].indices: indices matrix (total_decoded_outputs[j] x 2); the rows store [batch, time]. decoded[j].values: values vector, size (total_decoded_outputs[j]); stores the decoded classes for beam j. decoded[j].shape: shape vector, size (2); the values are [batch_size, max_decoded_length[j]].
  • log_probability: a float matrix (batch_size x top_paths) containing sequence log-probabilities.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ctc_beam_search_decoder_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ctc_beam_search_decoder_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ctc_beam_search_decoder_layer

Return

Applicative

Original documentation for Builder.ctc_beam_search_decoder_layer

def ctc_beam_search_decoder_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.ctc_beam_search_decoder, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.ctc_beam_search_decoder

def ctc_beam_search_decoder(inputs, sequence_length, beam_width=100, top_paths=1, merge_repeated=True):

Performs beam search decoding on the logits given in input.

Note: The ctc_greedy_decoder is a special case of the ctc_beam_search_decoder with top_paths=1 (but that decoder is faster for this special case).

If merge_repeated is True, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the top path is A B B B B, the return value is:

  • A B if merge_repeated = True.
  • A B B B B if merge_repeated = False.

Args: inputs: 3-D float Tensor, size [max_time x batch_size x num_classes]. The logits. sequence_length: 1-D int32 vector containing sequence lengths, having size [batch_size]. beam_width: An int scalar >= 0 (beam search beam width). top_paths: An int scalar >= 0, <= beam_width (controls output size). merge_repeated: Boolean. Default: True.

Returns: A tuple (decoded, log_probabilities) where:

  • decoded: a list of length top_paths, where decoded[j] is a SparseTensor containing the decoded outputs. decoded[j].indices: indices matrix (total_decoded_outputs[j] x 2); the rows store [batch, time]. decoded[j].values: values vector, size (total_decoded_outputs[j]); stores the decoded classes for beam j. decoded[j].shape: shape vector, size (2); the values are [batch_size, max_decoded_length[j]].
  • log_probability: a float matrix (batch_size x top_paths) containing sequence log-probabilities.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ctc_greedy_decoder(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ctc_greedy_decoder, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ctc_greedy_decoder

Return

Applicative

Original documentation for Builder.ctc_greedy_decoder

def ctc_greedy_decoder(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.ctc_greedy_decoder to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.ctc_greedy_decoder

def ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True)

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index (num_classes - 1), no new element is emitted.

If merge_repeated is True, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence A B B * B * B (where '*' is the blank label) becomes

  • A B if merge_repeated=True.
  • A B B B B B if merge_repeated=False.

Args: inputs: 3-D float Tensor sized [max_time x batch_size x num_classes]. The logits. sequence_length: 1-D int32 vector containing sequence lengths, having size [batch_size]. merge_repeated: Boolean. Default: True.

Returns: A tuple (decoded, log_probabilities) where:

  • decoded: a single-element list; decoded[0] is a SparseTensor containing the decoded outputs. decoded.indices: indices matrix (total_decoded_outputs x 2); the rows store [batch, time]. decoded.values: values vector, size (total_decoded_outputs); stores the decoded classes. decoded.shape: shape vector, size (2); the values are [batch_size, max_decoded_length].
  • log_probability: a float matrix (batch_size x 1) containing sequence log-probabilities.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ctc_greedy_decoder_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ctc_greedy_decoder_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ctc_greedy_decoder_layer

Return

Applicative

Original documentation for Builder.ctc_greedy_decoder_layer

def ctc_greedy_decoder_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.ctc_greedy_decoder, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.ctc_greedy_decoder

def ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True):

Performs greedy decoding on the logits given in input (best path).

Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index (num_classes - 1), no new element is emitted.

If merge_repeated is True, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence A B B * B * B (where '*' is the blank label) becomes

  • A B if merge_repeated=True.
  • A B B B B B if merge_repeated=False.

Args: inputs: 3-D float Tensor sized [max_time x batch_size x num_classes]. The logits. sequence_length: 1-D int32 vector containing sequence lengths, having size [batch_size]. merge_repeated: Boolean. Default: True.

Returns: A tuple (decoded, log_probabilities) where:

  • decoded: a single-element list; decoded[0] is a SparseTensor containing the decoded outputs. decoded.indices: indices matrix (total_decoded_outputs x 2); the rows store [batch, time]. decoded.values: values vector, size (total_decoded_outputs); stores the decoded classes. decoded.shape: shape vector, size (2); the values are [batch_size, max_decoded_length].
  • log_probability: a float matrix (batch_size x 1) containing sequence log-probabilities.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ctc_loss(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ctc_loss, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ctc_loss

Return

Applicative

Original documentation for Builder.ctc_loss

def ctc_loss(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.ctc_loss to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.ctc_loss

def ctc_loss(inputs, labels, sequence_length, preprocess_collapse_repeated=False, ctc_merge_repeated=True, time_major=True)

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.

http://www.cs.toronto.edu/~graves/icml_2006.pdf

Input requirements:

sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b.

Notes:

This class performs the softmax operation for you, so inputs should be e.g. linear projections of outputs by an LSTM.

The inputs Tensor's innermost dimension size, num_classes, represents num_labels + 1 classes, where num_labels is the number of true labels, and the largest value (num_classes - 1) is reserved for the blank label.

For example, for a vocabulary containing 3 labels [a, b, c], num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank: 3}.

Regarding the arguments preprocess_collapse_repeated and ctc_merge_repeated:

If preprocess_collapse_repeated is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If ctc_merge_repeated is set False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

  • preprocess_collapse_repeated=False, ctc_merge_repeated=True

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

  • preprocess_collapse_repeated=True, ctc_merge_repeated=False

Never learns to output repeated classes, as they are collapsed in the input labels before training.

  • preprocess_collapse_repeated=False, ctc_merge_repeated=False

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

  • preprocess_collapse_repeated=True, ctc_merge_repeated=True

Untested. Very likely will not learn to output repeated classes.

Args:

  • inputs: 3-D float Tensor. If time_major == False, this will be a Tensor shaped [batch_size x max_time x num_classes]. If time_major == True (default), this will be a Tensor shaped [max_time x batch_size x num_classes]. The logits.
  • labels: an int32 SparseTensor. labels.indices[i, :] == [b, t] means labels.values[i] stores the id for (batch b, time t). labels.values[i] must take on values in [0, num_labels). See core/ops/ctc_ops.cc for more details.
  • sequence_length: 1-D int32 vector, size [batch_size]. The sequence lengths.
  • preprocess_collapse_repeated: boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
  • ctc_merge_repeated: boolean. Default: True.
  • time_major: the shape format of the inputs Tensors. If True, these Tensors must be shaped [max_time, batch_size, num_classes]. If False, these Tensors must be shaped [batch_size, max_time, num_classes]. Using time_major = True (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.

Returns: A 1-D float Tensor, size [batch], containing the negative log probabilities.

Raises: TypeError: if labels is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
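
A minimal sketch of the labels layout described above, with illustrative shapes: a vocabulary [a, b, c] gives num_classes = 4 with index 3 reserved for the blank, and labels are supplied as an int32 SparseTensor (TF 0.x-era constructor with a shape argument assumed).

    import tensorflow as tf

    # Batch of two label sequences, "ab" and "c".
    labels = tf.SparseTensor(
        indices=[[0, 0], [0, 1], [1, 0]],  # [batch, time] pairs
        values=[0, 1, 2],                  # ids in [0, num_labels)
        shape=[2, 2])                      # [batch_size, max_label_length]

    # time_major=True (default): [max_time, batch_size, num_classes].
    logits = tf.placeholder(tf.float32, [None, 2, 4])
    seq_len = tf.constant([2, 1], dtype=tf.int32)   # one length per batch item
    loss = tf.nn.ctc_loss(logits, labels, seq_len)  # shape [batch_size]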

def ctc_loss_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ctc_loss_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ctc_loss_layer

Return

Applicative

Original documentation for Builder.ctc_loss_layer

def ctc_loss_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.ctc_loss, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.ctc_loss

def ctc_loss(inputs, labels, sequence_length, preprocess_collapse_repeated=False, ctc_merge_repeated=True, time_major=True):

Computes the CTC (Connectionist Temporal Classification) Loss.

This op implements the CTC loss as presented in the article:

A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.

http://www.cs.toronto.edu/~graves/icml_2006.pdf

Input requirements:

sequence_length(b) <= time for all b

max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b.

Notes:

This class performs the softmax operation for you, so inputs should be e.g. linear projections of outputs by an LSTM.

The inputs Tensor's innermost dimension size, num_classes, represents num_labels + 1 classes, where num_labels is the number of true labels, and the largest value (num_classes - 1) is reserved for the blank label.

For example, for a vocabulary containing 3 labels [a, b, c], num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank: 3}.

Regarding the arguments preprocess_collapse_repeated and ctc_merge_repeated:

If preprocess_collapse_repeated is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If ctc_merge_repeated is set False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.

Here is a table of the (roughly) expected first order behavior:

  • preprocess_collapse_repeated=False, ctc_merge_repeated=True

Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

  • preprocess_collapse_repeated=True, ctc_merge_repeated=False

Never learns to output repeated classes, as they are collapsed in the input labels before training.

  • preprocess_collapse_repeated=False, ctc_merge_repeated=False

Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

  • preprocess_collapse_repeated=True, ctc_merge_repeated=True

Untested. Very likely will not learn to output repeated classes.

Args:

  • inputs: 3-D float Tensor. If time_major == False, this will be a Tensor shaped [batch_size x max_time x num_classes]. If time_major == True (default), this will be a Tensor shaped [max_time x batch_size x num_classes]. The logits.
  • labels: an int32 SparseTensor. labels.indices[i, :] == [b, t] means labels.values[i] stores the id for (batch b, time t). labels.values[i] must take on values in [0, num_labels). See core/ops/ctc_ops.cc for more details.
  • sequence_length: 1-D int32 vector, size [batch_size]. The sequence lengths.
  • preprocess_collapse_repeated: boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
  • ctc_merge_repeated: boolean. Default: True.
  • time_major: the shape format of the inputs Tensors. If True, these Tensors must be shaped [max_time, batch_size, num_classes]. If False, these Tensors must be shaped [batch_size, max_time, num_classes]. Using time_major = True (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.

Returns: A 1-D float Tensor, size [batch], containing the negative log probabilities.

Raises: TypeError: if labels is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cumprod(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cumprod, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cumprod

Return

Applicative

Original documentation for Builder.cumprod

def cumprod(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cumprod to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cumprod

def cumprod(x, axis=0, exclusive=False, reverse=False, name=None)

Compute the cumulative product of the tensor x along axis.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output: tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]

By setting the exclusive kwarg to True, an exclusive cumprod is performed instead: tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]

By setting the reverse kwarg to True, the cumprod is performed in the opposite direction: tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]. This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined: tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]

Args: x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. axis: A Tensor of type int32 (default: 0). reverse: A bool (default: False). name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cumprod_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cumprod_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cumprod_layer

Return

Applicative

Original documentation for Builder.cumprod_layer

def cumprod_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cumprod, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cumprod

def cumprod(x, axis=0, exclusive=False, reverse=False, name=None):

Compute the cumulative product of the tensor x along axis.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output: tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]

By setting the exclusive kwarg to True, an exclusive cumprod is performed instead: tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]

By setting the reverse kwarg to True, the cumprod is performed in the opposite direction: tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]. This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined: tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]

Args: x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. axis: A Tensor of type int32 (default: 0). reverse: A bool (default: False). name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def cumsum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cumsum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cumsum

Return

Applicative

Original documentation for Builder.cumsum

def cumsum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.cumsum to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.cumsum

def cumsum(x, axis=0, exclusive=False, reverse=False, name=None)

Compute the cumulative sum of the tensor x along axis.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output: tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]

By setting the exclusive kwarg to True, an exclusive cumsum is performed instead: tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]

By setting the reverse kwarg to True, the cumsum is performed in the opposite direction: tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]. This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined: tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]

Args: x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. axis: A Tensor of type int32 (default: 0). reverse: A bool (default: False). name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
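
A concrete numeric version of the four modes above (hedged sketch, TF 0.x-era session API assumed):

    import tensorflow as tf

    x = tf.constant([1, 2, 3])

    with tf.Session() as sess:
        print(sess.run(tf.cumsum(x)))                                # [1 3 6]
        print(sess.run(tf.cumsum(x, exclusive=True)))                # [0 1 3]
        print(sess.run(tf.cumsum(x, reverse=True)))                  # [6 5 3]
        print(sess.run(tf.cumsum(x, exclusive=True, reverse=True)))  # [5 3 0]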

def cumsum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.cumsum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.cumsum_layer

Return

Applicative

Original documentation for Builder.cumsum_layer

def cumsum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.cumsum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.cumsum

def cumsum(x, axis=0, exclusive=False, reverse=False, name=None):

Compute the cumulative sum of the tensor x along axis.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output: tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]

By setting the exclusive kwarg to True, an exclusive cumsum is performed instead: tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]

By setting the reverse kwarg to True, the cumsum is performed in the opposite direction: tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]. This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined: tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]

Args: x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. axis: A Tensor of type int32 (default: 0). reverse: A bool (default: False). name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def decode_base64(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_base64, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_base64

Return

Applicative

Original documentation for Builder.decode_base64

def decode_base64(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.decode_base64 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.decode_base64

def decode_base64(input, name=None)

Decode web-safe base64-encoded strings.

Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use - and _ instead of + and /.

Args: input: A Tensor of type string. Base64 strings to decode. name: A name for the operation (optional).

Returns: A Tensor of type string. Decoded strings.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def decode_base64_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_base64_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_base64_layer

Return

Applicative

Original documentation for Builder.decode_base64_layer

def decode_base64_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.decode_base64, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.decode_base64

def decode_base64(input, name=None):

Decode web-safe base64-encoded strings.

Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use - and _ instead of + and /.

Args: input: A Tensor of type string. Base64 strings to decode. name: A name for the operation (optional).

Returns: A Tensor of type string. Decoded strings.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def decode_csv(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_csv, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_csv

Return

Applicative

Original documentation for Builder.decode_csv

def decode_csv(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.decode_csv to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.decode_csv

def decode_csv(records, record_defaults, field_delim=None, name=None)

Convert CSV records to tensors. Each column maps to one tensor.

RFC 4180 format is expected for the CSV records (https://tools.ietf.org/html/rfc4180). Note that leading and trailing spaces are allowed in int and float fields.

Args: records: A Tensor of type string. Each string is a record/row in the CSV and all records should have the same format. record_defaults: A list of Tensor objects with types from: float32, int32, int64, string. One tensor per column of the input record, with either a scalar default value for that column or empty if the column is required. field_delim: An optional string. Defaults to ",". The delimiter used to separate fields in a record. name: A name for the operation (optional).

Returns: A list of Tensor objects. Has the same type as record_defaults. Each tensor will have the same shape as records.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
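
A minimal sketch of how record_defaults fixes both the column dtypes and the fallback values (illustrative data; TF 0.x-era session API assumed):

    import tensorflow as tf

    records = tf.constant(["1,2.5,hello", "4,,world"])
    # One default per column: int32, float32, string.
    col1, col2, col3 = tf.decode_csv(
        records, record_defaults=[[0], [0.0], [""]])

    with tf.Session() as sess:
        print(sess.run([col1, col2, col3]))
        # The empty second field of the second record falls back to 0.0.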

def decode_csv_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_csv_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_csv_layer

Return

Applicative

Original documentation for Builder.decode_csv_layer

def decode_csv_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.decode_csv, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.decode_csv

def decode_csv(records, record_defaults, field_delim=None, name=None):

Convert CSV records to tensors. Each column maps to one tensor.

RFC 4180 format is expected for the CSV records (https://tools.ietf.org/html/rfc4180). Note that leading and trailing spaces are allowed in int and float fields.

Args: records: A Tensor of type string. Each string is a record/row in the CSV and all records should have the same format. record_defaults: A list of Tensor objects with types from: float32, int32, int64, string. One tensor per column of the input record, with either a scalar default value for that column or empty if the column is required. field_delim: An optional string. Defaults to ",". The delimiter used to separate fields in a record. name: A name for the operation (optional).

Returns: A list of Tensor objects. Has the same type as record_defaults. Each tensor will have the same shape as records.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def decode_json_example(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_json_example, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_json_example

Return

Applicative

Original documentation for Builder.decode_json_example

def decode_json_example(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.decode_json_example to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.decode_json_example

def decode_json_example(json_examples, name=None)

Convert JSON-encoded Example records to binary protocol buffer strings.

This op translates a tensor containing Example records, encoded using the standard JSON mapping, into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.

Args: json_examples: A Tensor of type string. Each string is a JSON object serialized according to the JSON mapping of the Example proto. name: A name for the operation (optional).

Returns: A Tensor of type string. Each string is a binary Example protocol buffer corresponding to the respective element of json_examples.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def decode_json_example_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_json_example_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_json_example_layer

Return

Applicative

Original documentation for Builder.decode_json_example_layer

def decode_json_example_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.decode_json_example, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.decode_json_example

def decode_json_example(json_examples, name=None):

Convert JSON-encoded Example records to binary protocol buffer strings.

This op translates a tensor containing Example records, encoded using the standard JSON mapping, into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.

Args: json_examples: A Tensor of type string. Each string is a JSON object serialized according to the JSON mapping of the Example proto. name: A name for the operation (optional).

Returns: A Tensor of type string. Each string is a binary Example protocol buffer corresponding to the respective element of json_examples.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def decode_raw(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_raw, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_raw

Return

Applicative

Original documentation for Builder.decode_raw

def decode_raw(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.decode_raw to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.decode_raw

def decode_raw(bytes, out_type, little_endian=None, name=None)

Reinterpret the bytes of a string as a vector of numbers.

Args: bytes: A Tensor of type string. All the elements must have the same length. out_type: A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64. little_endian: An optional bool. Defaults to True. Whether the input bytes are in little-endian order. Ignored for out_type values that are stored in a single byte like uint8. name: A name for the operation (optional).

Returns: A Tensor of type out_type. A Tensor with one more dimension than the input bytes. The added dimension will have size equal to the length of the elements of bytes divided by the number of bytes to represent out_type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
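
A minimal sketch of the shape rule above: four input bytes reinterpreted as one little-endian int32, which adds a trailing dimension of size len(bytes) / sizeof(out_type) (illustrative; TF 0.x-era session API assumed):

    import tensorflow as tf

    raw = tf.constant(["\x01\x00\x00\x00"])       # shape [1], 4 bytes each
    nums = tf.decode_raw(raw, out_type=tf.int32)  # shape [1, 1]

    with tf.Session() as sess:
        print(sess.run(nums))  # expected: [[1]]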

def decode_raw_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.decode_raw_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.decode_raw_layer

Return

Applicative

Original documentation for Builder.decode_raw_layer

def decode_raw_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.decode_raw, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.decode_raw

def decode_raw(bytes, out_type, little_endian=None, name=None):

Reinterpret the bytes of a string as a vector of numbers.

Args: bytes: A Tensor of type string. All the elements must have the same length. out_type: A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64. little_endian: An optional bool. Defaults to True. Whether the input bytes are in little-endian order. Ignored for out_type values that are stored in a single byte like uint8. name: A name for the operation (optional).

Returns: A Tensor of type out_type. A Tensor with one more dimension than the input bytes. The added dimension will have size equal to the length of the elements of bytes divided by the number of bytes to represent out_type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def delete_session_tensor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.delete_session_tensor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.delete_session_tensor

Return

Applicative

Original documentation for Builder.delete_session_tensor

def delete_session_tensor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.delete_session_tensor to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.delete_session_tensor

def delete_session_tensor(handle, name=None)

Delete the tensor for the given tensor handle.

This is EXPERIMENTAL and subject to change.

Delete the tensor of a given tensor handle. The tensor is produced in a previous run() and stored in the state of the session.

Args: handle: The string representation of a persistent tensor handle. name: Optional name prefix for the return tensor.

Returns: A pair of graph elements. The first is a placeholder for feeding a tensor handle and the second is a deletion operation.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def delete_session_tensor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.delete_session_tensor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.delete_session_tensor_layer

Return

Applicative

Original documentation for Builder.delete_session_tensor_layer

def delete_session_tensor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.delete_session_tensor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.delete_session_tensor

def delete_session_tensor(handle, name=None):

Delete the tensor for the given tensor handle.

This is EXPERIMENTAL and subject to change.

Delete the tensor of a given tensor handle. The tensor is produced in a previous run() and stored in the state of the session.

Args: handle: The string representation of a persistent tensor handle. name: Optional name prefix for the return tensor.

Returns: A pair of graph elements. The first is a placeholder for feeding a tensor handle and the second is a deletion operation.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
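
A hedged sketch of the persistent-handle workflow this wraps, using the experimental TF 0.x API quoted above (tf.get_session_handle is assumed to exist in the same API version):

```python
import tensorflow as tf

a = tf.constant(10.0)
handle_op = tf.get_session_handle(a)  # persists `a` in the session state

with tf.Session() as sess:
    h = sess.run(handle_op)
    # Build the deletion subgraph: a feed placeholder plus the delete op.
    holder, deleter = tf.delete_session_tensor(h.handle)
    sess.run(deleter, feed_dict={holder: h.handle})  # frees the stored tensor
```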

def depth_to_space(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depth_to_space, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depth_to_space

Return

Applicative

Original documentation for Builder.depth_to_space

def depth_to_space(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.depth_to_space, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.depth_to_space

def depth_to_space(input, block_size, name=None)

DepthToSpace for tensors of type T.

Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved.

  • Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size.
  • The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size.
  • The depth of the input tensor must be divisible by block_size * block_size.

That is, assuming the input is in the shape: [batch, height, width, depth], the shape of the output will be: [batch, height*block_size, width*block_size, depth/(block_size*block_size)]

This operation requires that the input tensor be of rank 4, and that block_size be >=1 and that block_size * block_size be a divisor of the input depth.

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given this input of shape [1, 1, 1, 4], and a block size of 2:

```prettyprint
x = [[[[1, 2, 3, 4]]]]
```

This operation will output a tensor of shape [1, 2, 2, 1]:

```prettyprint
[[[[1], [2]], [[3], [4]]]]
```

Here, the input has a batch of 1 and each batch element has shape [1, 1, 4]; the corresponding output will have 2x2 elements and a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1].

For an input tensor with larger depth, here of shape [1, 1, 1, 12]:

```prettyprint
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

this operation, for a block size of 2, will return the following tensor of shape [1, 2, 2, 3]:

```prettyprint
[[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
```

Similarly, for the following input of shape [1, 2, 2, 4], and a block size of 2:

```prettyprint
x = [[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]]]
```

the operator will return the following tensor of shape [1, 4, 4, 1]:

```prettyprint
x = [[ [1],  [2],  [5],  [6]],
     [ [3],  [4],  [7],  [8]],
     [ [9],  [10], [13], [14]],
     [ [11], [12], [15], [16]]]
```

Args: input: A Tensor. block_size: An int that is >= 2. The size of the spatial block, same as in Space2Depth. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
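
The first example above, as a runnable sketch (TF 0.x-era session API):

```python
import tensorflow as tf

x = tf.constant([[[[1, 2, 3, 4]]]])     # shape [1, 1, 1, 4]
y = tf.depth_to_space(x, block_size=2)  # shape [1, 2, 2, 1]

with tf.Session() as sess:
    print(sess.run(y))  # => [[[[1], [2]], [[3], [4]]]]
```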

def depth_to_space_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depth_to_space_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depth_to_space_layer

Return

Applicative

Original documentation for Builder.depth_to_space_layer

def depth_to_space_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.depth_to_space, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.depth_to_space

def depth_to_space(input, block_size, name=None):

DepthToSpace for tensors of type T.

Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved.

  • Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size.
  • The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size.
  • The depth of the input tensor must be divisible by block_size * block_size.

That is, assuming the input is in the shape: [batch, height, width, depth], the shape of the output will be: [batch, height*block_size, width*block_size, depth/(block_size*block_size)]

This operation requires that the input tensor be of rank 4, and that block_size be >=1 and that block_size * block_size be a divisor of the input depth.

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given this input of shape [1, 1, 1, 4], and a block size of 2:

```prettyprint
x = [[[[1, 2, 3, 4]]]]
```

This operation will output a tensor of shape [1, 2, 2, 1]:

```prettyprint
[[[[1], [2]], [[3], [4]]]]
```

Here, the input has a batch of 1 and each batch element has shape [1, 1, 4]; the corresponding output will have 2x2 elements and a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1].

For an input tensor with larger depth, here of shape [1, 1, 1, 12]:

```prettyprint
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

this operation, for a block size of 2, will return the following tensor of shape [1, 2, 2, 3]:

```prettyprint
[[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
```

Similarly, for the following input of shape [1, 2, 2, 4], and a block size of 2:

```prettyprint
x = [[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]]]
```

the operator will return the following tensor of shape [1, 4, 4, 1]:

```prettyprint
x = [[ [1],  [2],  [5],  [6]],
     [ [3],  [4],  [7],  [8]],
     [ [9],  [10], [13], [14]],
     [ [11], [12], [15], [16]]]
```

Args: input: A Tensor. block_size: An int that is >= 2. The size of the spatial block, same as in Space2Depth. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d

Return

Applicative

Original documentation for Builder.depthwise_conv2d

def depthwise_conv2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.depthwise_conv2d, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.depthwise_conv2d

def depthwise_conv2d(input, filter, strides, padding, name=None)

Depthwise 2-D convolution.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter tensor of shape [filter_height, filter_width, in_channels, channel_multiplier] containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels.

In detail,

output[b, i, j, k * channel_multiplier + q] =
    sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                 filter[di, dj, k, q]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args:
  • input: 4-D with shape [batch, in_height, in_width, in_channels].
  • filter: 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier].
  • strides: 1-D of size 4. The stride of the sliding window for each dimension of input.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • name: A name for this operation (optional).

Returns: A 4-D Tensor of shape [batch, out_height, out_width, in_channels * channel_multiplier].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
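
A shape-only sketch of the channel-multiplier behaviour described above (random values; names are illustrative):

```python
import tensorflow as tf

x = tf.random_normal([1, 8, 8, 3])  # [batch, in_height, in_width, in_channels]
f = tf.random_normal([3, 3, 3, 2])  # channel_multiplier = 2
y = tf.nn.depthwise_conv2d(x, f, strides=[1, 1, 1, 1], padding='SAME')

with tf.Session() as sess:
    print(sess.run(tf.shape(y)))    # => [1 8 8 6], i.e. 3 * 2 output channels
```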

def depthwise_conv2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_layer

Return

Applicative

Original documentation for Builder.depthwise_conv2d_layer

def depthwise_conv2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.depthwise_conv2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.depthwise_conv2d

def depthwise_conv2d(input, filter, strides, padding, name=None):

Depthwise 2-D convolution.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter tensor of shape [filter_height, filter_width, in_channels, channel_multiplier] containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels.

In detail,

output[b, i, j, k * channel_multiplier + q] =
    sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                 filter[di, dj, k, q]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args:
  • input: 4-D with shape [batch, in_height, in_width, in_channels].
  • filter: 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier].
  • strides: 1-D of size 4. The stride of the sliding window for each dimension of input.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • name: A name for this operation (optional).

Returns: A 4-D Tensor of shape [batch, out_height, out_width, in_channels * channel_multiplier].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d_native(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_native, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_native

Return

Applicative

Original documentation for Builder.depthwise_conv2d_native

def depthwise_conv2d_native(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.depthwise_conv2d_native, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.depthwise_conv2d_native

def depthwise_conv2d_native(input, filter, strides, padding, name=None)

Computes a 2-D depthwise convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels.

for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args:
  • input: A Tensor. Must be one of the following types: float32, float64.
  • filter: A Tensor. Must have the same type as input.
  • strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d_native_backprop_filter(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_native_backprop_filter, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_native_backprop_filter

Return

Applicative

Original documentation for Builder.depthwise_conv2d_native_backprop_filter

def depthwise_conv2d_native_backprop_filter(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.depthwise_conv2d_native_backprop_filter, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.depthwise_conv2d_native_backprop_filter

def depthwise_conv2d_native_backprop_filter(input, filter_sizes, out_backprop, strides, padding, name=None)

Computes the gradients of depthwise convolution with respect to the filter.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64. 4-D with shape [batch, in_height, in_width, in_channels].
  • filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, depthwise_multiplier] tensor.
  • out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
  • strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d_native_backprop_filter_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_native_backprop_filter_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_native_backprop_filter_layer

Return

Applicative

Original documentation for Builder.depthwise_conv2d_native_backprop_filter_layer

def depthwise_conv2d_native_backprop_filter_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.depthwise_conv2d_native_backprop_filter, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.depthwise_conv2d_native_backprop_filter

def depthwise_conv2d_native_backprop_filter(input, filter_sizes, out_backprop, strides, padding, name=None):

Computes the gradients of depthwise convolution with respect to the filter.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64. 4-D with shape [batch, in_height, in_width, in_channels].
  • filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, depthwise_multiplier] tensor.
  • out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
  • strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d_native_backprop_input(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_native_backprop_input, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_native_backprop_input

Return

Applicative

Original documentation for Builder.depthwise_conv2d_native_backprop_input

def depthwise_conv2d_native_backprop_input(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.depthwise_conv2d_native_backprop_input, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.depthwise_conv2d_native_backprop_input

def depthwise_conv2d_native_backprop_input(input_sizes, filter, out_backprop, strides, padding, name=None)

Computes the gradients of depthwise convolution with respect to the input.

Args:
  • input_sizes: A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor.
  • filter: A Tensor. Must be one of the following types: float32, float64. 4-D with shape [filter_height, filter_width, in_channels, depthwise_multiplier].
  • out_backprop: A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
  • strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as filter. 4-D with shape [batch, in_height, in_width, in_channels]. Gradient w.r.t. the input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d_native_backprop_input_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_native_backprop_input_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_native_backprop_input_layer

Return

Applicative

Original documentation for Builder.depthwise_conv2d_native_backprop_input_layer

def depthwise_conv2d_native_backprop_input_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.depthwise_conv2d_native_backprop_input, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.depthwise_conv2d_native_backprop_input

def depthwise_conv2d_native_backprop_input(input_sizes, filter, out_backprop, strides, padding, name=None):

Computes the gradients of depthwise convolution with respect to the input.

Args:
  • input_sizes: A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor.
  • filter: A Tensor. Must be one of the following types: float32, float64. 4-D with shape [filter_height, filter_width, in_channels, depthwise_multiplier].
  • out_backprop: A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
  • strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as filter. 4-D with shape [batch, in_height, in_width, in_channels]. Gradient w.r.t. the input of the convolution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def depthwise_conv2d_native_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.depthwise_conv2d_native_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.depthwise_conv2d_native_layer

Return

Applicative

Original documentation for Builder.depthwise_conv2d_native_layer

def depthwise_conv2d_native_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.depthwise_conv2d_native, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.depthwise_conv2d_native

def depthwise_conv2d_native(input, filter, strides, padding, name=None):

Computes a 2-D depthwise convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels.

for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args:
  • input: A Tensor. Must be one of the following types: float32, float64.
  • filter: A Tensor. Must have the same type as input.
  • strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def deserialize_many_sparse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.deserialize_many_sparse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.deserialize_many_sparse

Return

Applicative

Original documentation for Builder.deserialize_many_sparse

def deserialize_many_sparse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.deserialize_many_sparse, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.deserialize_many_sparse

def deserialize_many_sparse(serialized_sparse, dtype, rank=None, name=None)

Deserialize and concatenate SparseTensors from a serialized minibatch.

The input serialized_sparse must be a string matrix of shape [N x 3] where N is the minibatch size and the rows correspond to packed outputs of serialize_sparse. The ranks of the original SparseTensor objects must all match. When the final SparseTensor is created, it has rank one higher than the ranks of the incoming SparseTensor objects (they have been concatenated along a new row dimension).

The output SparseTensor object's shape values for all dimensions but the first are the max across the input SparseTensor objects' shape values for the corresponding dimensions. Its first shape value is N, the minibatch size.

The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run sparse_reorder to restore index ordering.

For example, if the serialized input is a [2, 3] matrix representing two original SparseTensor objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized SparseTensor will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Args:
  • serialized_sparse: 2-D Tensor of type string of shape [N, 3]. The serialized and packed SparseTensor objects.
  • dtype: The dtype of the serialized SparseTensor objects.
  • rank: (optional) Python int, the rank of the SparseTensor objects.
  • name: A name prefix for the returned tensors (optional).

Returns: A SparseTensor representing the deserialized SparseTensors, concatenated along the SparseTensors' first dimension.

All of the serialized SparseTensors must have had the same rank and type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
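
A hedged round-trip sketch of the minibatch packing described above (TF 0.x-era names such as tf.pack and the SparseTensor shape argument are assumed):

```python
import tensorflow as tf

st = tf.SparseTensor(indices=[[0], [10]], values=[1, 2], shape=[50])
row = tf.serialize_sparse(st)  # 1-D string tensor of size 3
batch = tf.pack([row, row])    # [2, 3] string matrix, one row per example
out = tf.deserialize_many_sparse(batch, dtype=tf.int32)
# `out` gains a leading minibatch dimension: dense shape [2, 50].
```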

def deserialize_many_sparse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.deserialize_many_sparse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.deserialize_many_sparse_layer

Return

Applicative

Original documentation for Builder.deserialize_many_sparse_layer

def deserialize_many_sparse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.deserialize_many_sparse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.deserialize_many_sparse

def deserialize_many_sparse(serialized_sparse, dtype, rank=None, name=None):

Deserialize and concatenate SparseTensors from a serialized minibatch.

The input serialized_sparse must be a string matrix of shape [N x 3] where N is the minibatch size and the rows correspond to packed outputs of serialize_sparse. The ranks of the original SparseTensor objects must all match. When the final SparseTensor is created, it has rank one higher than the ranks of the incoming SparseTensor objects (they have been concatenated along a new row dimension).

The output SparseTensor object's shape values for all dimensions but the first are the max across the input SparseTensor objects' shape values for the corresponding dimensions. Its first shape value is N, the minibatch size.

The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run sparse_reorder to restore index ordering.

For example, if the serialized input is a [2, 3] matrix representing two original SparseTensor objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized SparseTensor will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Args:
  • serialized_sparse: 2-D Tensor of type string of shape [N, 3]. The serialized and packed SparseTensor objects.
  • dtype: The dtype of the serialized SparseTensor objects.
  • rank: (optional) Python int, the rank of the SparseTensor objects.
  • name: A name prefix for the returned tensors (optional).

Returns: A SparseTensor representing the deserialized SparseTensors, concatenated along the SparseTensors' first dimension.

All of the serialized SparseTensors must have had the same rank and type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def diag(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.diag, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.diag

Return

Applicative

Original documentation for Builder.diag

def diag(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.diag, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.diag

def diag(diagonal, name=None)

Returns a diagonal tensor with given diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.

For example:

```prettyprint
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
```

Args: diagonal: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is at most 3. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as diagonal.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
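
The documented example, as a runnable sketch:

```python
import tensorflow as tf

d = tf.constant([1, 2, 3, 4])
m = tf.diag(d)  # 4x4 matrix with d on the diagonal, zeros elsewhere

with tf.Session() as sess:
    print(sess.run(m))
```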

def diag_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.diag_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.diag_layer

Return

Applicative

Original documentation for Builder.diag_layer

def diag_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.diag, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.diag

def diag(diagonal, name=None):

Returns a diagonal tensor with given diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.

For example:

```prettyprint
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
```

Args: diagonal: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is at most 3. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as diagonal.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def diag_part(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.diag_part, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.diag_part

Return

Applicative

Original documentation for Builder.diag_part

def diag_part(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.diag_part, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.diag_part

def diag_part(input, name=None)

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].

For example:

```prettyprint
# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]
tf.diag_part(input) ==> [1, 2, 3, 4]
```

Args: input: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is 2, 4, or 6. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The extracted diagonal.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
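
A small sketch showing that diag_part inverts diag for a square input:

```python
import tensorflow as tf

m = tf.diag(tf.constant([1, 2, 3, 4]))  # rank-2 diagonal matrix
d = tf.diag_part(m)                     # recovers [1, 2, 3, 4]

with tf.Session() as sess:
    print(sess.run(d))
```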

def diag_part_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.diag_part_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.diag_part_layer

Return

Applicative

Original documentation for Builder.diag_part_layer

def diag_part_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.diag_part, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.diag_part

def diag_part(input, name=None):

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].

For example:

```prettyprint
# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]
tf.diag_part(input) ==> [1, 2, 3, 4]
```

Args: input: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is 2, 4, or 6. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The extracted diagonal.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def digamma(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.digamma, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.digamma

Return

Applicative

Original documentation for Builder.digamma

def digamma(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.digamma, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.digamma

def digamma(x, name=None)

Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
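
As a quick numerical check, digamma(1) equals minus the Euler-Mascheroni constant, so this sketch should print values close to [-0.5772, 0.4228]:

```python
import tensorflow as tf

psi = tf.digamma(tf.constant([1.0, 2.0]))

with tf.Session() as sess:
    print(sess.run(psi))  # ~ [-0.5772  0.4228]
```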

def digamma_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.digamma_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.digamma_layer

Return

Applicative

Original documentation for Builder.digamma_layer

def digamma_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.digamma, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.digamma

def digamma(x, name=None):

Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dilation2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dilation2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dilation2d

Return

Applicative

Original documentation for Builder.dilation2d

def dilation2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.dilation2d, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.dilation2d

def dilation2d(input, filter, strides, rates, padding, name=None)

Computes the grayscale dilation of 4-D input and 3-D filter tensors.

The input tensor has shape [batch, in_height, in_width, depth] and the filter tensor has shape [filter_height, filter_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with conv2d, we use unmirrored filters):

output[b, y, x, c] =
   max_{dy, dx} input[b,
                      strides[1] * y + rates[1] * dy,
                      strides[2] * x + rates[2] * dx,
                      c] +
                filter[dy, dx, c]

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of input by the filter is equal to the negation of the erosion of -input by the reflected filter.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
  • filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
  • strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
  • rates: A list of ints that has length >= 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [batch, out_height, out_width, depth].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
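
A shape-only sketch of the 4-D input / 3-D filter convention described above (random values):

```python
import tensorflow as tf

x = tf.random_normal([1, 10, 10, 3])  # [batch, in_height, in_width, depth]
f = tf.random_normal([3, 3, 3])       # [filter_height, filter_width, depth]
y = tf.nn.dilation2d(x, f, strides=[1, 1, 1, 1],
                     rates=[1, 1, 1, 1], padding='SAME')

with tf.Session() as sess:
    print(sess.run(tf.shape(y)))      # => [1 10 10 3] under 'SAME' padding
```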

def dilation2d_backprop_filter(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dilation2d_backprop_filter, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dilation2d_backprop_filter

Return

Applicative

Original documentation for Builder.dilation2d_backprop_filter

def dilation2d_backprop_filter(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.dilation2d_backprop_filter, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.dilation2d_backprop_filter

def dilation2d_backprop_filter(input, filter, out_backprop, strides, rates, padding, name=None)

Computes the gradient of morphological 2-D dilation with respect to the filter.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
  • filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
  • out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, depth].
  • strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
  • rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 3-D with shape [filter_height, filter_width, depth].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dilation2d_backprop_filter_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dilation2d_backprop_filter_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dilation2d_backprop_filter_layer

Return

Applicative

Original documentation for Builder.dilation2d_backprop_filter_layer

def dilation2d_backprop_filter_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.dilation2d_backprop_filter, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.dilation2d_backprop_filter

def dilation2d_backprop_filter(input, filter, out_backprop, strides, rates, padding, name=None):

Computes the gradient of morphological 2-D dilation with respect to the filter.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
  • filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
  • out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, depth].
  • strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
  • rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 3-D with shape [filter_height, filter_width, depth].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dilation2d_backprop_input(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dilation2d_backprop_input, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dilation2d_backprop_input

Return

Applicative

Original documentation for Builder.dilation2d_backprop_input

def dilation2d_backprop_input(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.dilation2d_backprop_input, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.dilation2d_backprop_input

def dilation2d_backprop_input(input, filter, out_backprop, strides, rates, padding, name=None)

Computes the gradient of morphological 2-D dilation with respect to the input.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
  • filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
  • out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, depth].
  • strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
  • rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [batch, in_height, in_width, depth].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dilation2d_backprop_input_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dilation2d_backprop_input_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dilation2d_backprop_input_layer

Return

Applicative

Original documentation for Builder.dilation2d_backprop_input_layer

def dilation2d_backprop_input_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.dilation2d_backprop_input, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.dilation2d_backprop_input

def dilation2d_backprop_input(input, filter, out_backprop, strides, rates, padding, name=None):

Computes the gradient of morphological 2-D dilation with respect to the input.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
  • filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
  • out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, depth].
  • strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
  • rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [batch, in_height, in_width, depth].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dilation2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dilation2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dilation2d_layer

Return

Applicative

Original documentation for Builder.dilation2d_layer

def dilation2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.dilation2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.dilation2d

def dilation2d(input, filter, strides, rates, padding, name=None):

Computes the grayscale dilation of 4-D input and 3-D filter tensors.

The input tensor has shape [batch, in_height, in_width, depth] and the filter tensor has shape [filter_height, filter_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with conv2d, we use unmirrored filters):

output[b, y, x, c] =
   max_{dy, dx} input[b,
                      strides[1] * y + rates[1] * dy,
                      strides[2] * x + rates[2] * dx,
                      c] +
                filter[dy, dx, c]

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of input by the filter is equal to the negation of the erosion of -input by the reflected filter.

Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
  • filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
  • strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
  • rates: A list of ints that has length >= 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. 4-D with shape [batch, out_height, out_width, depth].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def div(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.div, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.div

Return

Applicative

Original documentation for Builder.div

def div(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.div, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.div

def div(x, y, name=None)

Returns x / y element-wise.

NOTE: Div supports broadcasting.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
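
With integer inputs the division is integral, as a short sketch illustrates:

```python
import tensorflow as tf

q = tf.div(tf.constant([7, 8]), tf.constant(2))  # int32 inputs, scalar broadcast

with tf.Session() as sess:
    print(sess.run(q))  # => [3 4]
```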

def div_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.div_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.div_layer

Return

Applicative

Original documentation for Builder.div_layer

def div_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.div, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.div

def div(x, y, name=None):

Returns x / y element-wise.

NOTE: Div supports broadcasting.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def drop_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.drop_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.drop_layer

Return

Applicative

Original documentation for Builder.drop_layer

def drop_layer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tensorbuilder.Builder.drop_layer, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tensorbuilder.Builder.drop_layer

def drop_layer(x, keep_prob, seed=None, name=None)

Computes dropout. With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

Args:
  • x: A tensor.
  • keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept.
  • noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • name: A name for this operation (optional).

Returns: A Tensor of the same shape of x.

Raises: ValueError: If keep_prob is not in (0, 1].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dropout(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dropout, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dropout

Return

Applicative

Original documentation for Builder.dropout

def dropout(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.dropout, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.dropout

def dropout(x, keep_prob, noise_shape=None, seed=None, name=None)

Computes dropout.

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

Args: x: A tensor. keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept. noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags. seed: A Python integer. Used to create random seeds. See set_random_seed for behavior. name: A name for this operation (optional).

Returns: A Tensor of the same shape of x.

Raises: ValueError: If keep_prob is not in (0, 1].
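
A minimal sketch of the underlying op (assuming the session-style TF API of this document's era; the placeholder and seed are illustrative):

```python
import tensorflow as tf

x = tf.ones([4, 4])
keep_prob = tf.placeholder(tf.float32)    # feed 0.5 while training, 1.0 at test time
y = tf.nn.dropout(x, keep_prob, seed=42)  # kept entries are scaled by 1 / keep_prob

with tf.Session() as sess:
    print(sess.run(y, feed_dict={keep_prob: 0.5}))  # a mix of 0.0 and 2.0 entries
```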

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dropout_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dropout_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dropout_layer

Return

Applicative

Original documentation for Builder.dropout_layer

def dropout_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.dropout, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.dropout

def dropout(x, keep_prob, noise_shape=None, seed=None, name=None):

Computes dropout.

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

Args: x: A tensor. keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept. noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags. seed: A Python integer. Used to create random seeds. See set_random_seed for behavior. name: A name for this operation (optional).

Returns: A Tensor of the same shape of x.

Raises: ValueError: If keep_prob is not in (0, 1].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dynamic_partition(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dynamic_partition, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dynamic_partition

Return

Applicative

Original documentation for Builder.dynamic_partition

def dynamic_partition(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.dynamic_partition to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.dynamic_partition

def dynamic_partition(data, partitions, num_partitions, name=None)

Partitions data into num_partitions tensors using indices from partitions.

For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,

outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

outputs[i] = pack([data[js, ...] for js if partitions[js] == i])

data.shape must start with partitions.shape.

For example:

# Scalar partitions
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []  # Empty with shape [0, 2]
outputs[1] = [[10, 20]]

# Vector partitions
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]

Args: data: A Tensor. partitions: A Tensor of type int32. Any shape. Indices in the range [0, num_partitions). num_partitions: An int that is >= 1. The number of partitions to output. name: A name for the operation (optional).

Returns: A list of num_partitions Tensor objects of the same type as data.
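
The vector example above can be run directly; a minimal sketch assuming the session-style TF API:

```python
import tensorflow as tf

data = tf.constant([10, 20, 30, 40, 50])
partitions = tf.constant([0, 0, 1, 1, 0])
outputs = tf.dynamic_partition(data, partitions, num_partitions=2)

with tf.Session() as sess:
    out0, out1 = sess.run(outputs)
    print(out0)  # [10 20 50]
    print(out1)  # [30 40]
```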

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dynamic_partition_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dynamic_partition_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dynamic_partition_layer

Return

Applicative

Original documentation for Builder.dynamic_partition_layer

def dynamic_partition_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.dynamic_partition, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.dynamic_partition

def dynamic_partition(data, partitions, num_partitions, name=None):

Partitions data into num_partitions tensors using indices from partitions.

For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,

outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

outputs[i] = pack([data[js, ...] for js if partitions[js] == i])

data.shape must start with partitions.shape.

For example:

# Scalar partitions
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []  # Empty with shape [0, 2]
outputs[1] = [[10, 20]]

# Vector partitions
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]

Args: data: A Tensor. partitions: A Tensor of type int32. Any shape. Indices in the range [0, num_partitions). num_partitions: An int that is >= 1. The number of partitions to output. name: A name for the operation (optional).

Returns: A list of num_partitions Tensor objects of the same type as data.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dynamic_rnn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dynamic_rnn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dynamic_rnn

Return

Applicative

Original documentation for Builder.dynamic_rnn

def dynamic_rnn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.dynamic_rnn to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.dynamic_rnn

def dynamic_rnn(inputs, cell)

None

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dynamic_rnn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dynamic_rnn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dynamic_rnn_layer

Return

Applicative

Original documentation for Builder.dynamic_rnn_layer

def dynamic_rnn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.dynamic_rnn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.dynamic_rnn

def dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None):

Creates a recurrent neural network specified by RNNCell cell.

This function is functionally identical to the function rnn above, but performs fully dynamic unrolling of inputs.

Unlike rnn, the input inputs is not a Python list of Tensors, one for each frame. Instead, inputs may be a single Tensor where the maximum time is either the first or second dimension (see the parameter time_major). Alternatively, it may be a (possibly nested) tuple of Tensors, each of them having matching batch and time dimensions. The corresponding output is either a single Tensor having the same number of time steps and batch size, or a (possibly nested) tuple of such tensors, matching the nested structure of cell.output_size.

The parameter sequence_length is optional and is used to copy-through state and zero-out outputs when past a batch element's sequence length. So it's more for correctness than performance, unlike in rnn().

Args: cell: An instance of RNNCell. inputs: The RNN inputs.

If `time_major == False` (default), this must be a `Tensor` of shape:
  `[batch_size, max_time, ...]`, or a nested tuple of such
  elements.

If `time_major == True`, this must be a `Tensor` of shape:
  `[max_time, batch_size, ...]`, or a nested tuple of such
  elements.

This may also be a (possibly nested) tuple of Tensors satisfying
this property.  The first two dimensions must match across all the inputs,
but otherwise the ranks and other shape components may differ.
In this case, input to `cell` at each time-step will replicate the
structure of these tuples, except for the time dimension (from which the
time is taken).

The input to `cell` at each time step will be a `Tensor` or (possibly
nested) tuple of Tensors each with dimensions `[batch_size, ...]`.

sequence_length: (optional) An int32/int64 vector sized [batch_size]. initial_state: (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size. dtype: (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype. parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. time_major: The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. scope: VariableScope for the created subgraph; defaults to "RNN".

Returns: A pair (outputs, state) where:

outputs: The RNN output `Tensor`.

  If time_major == False (default), this will be a `Tensor` shaped:
    `[batch_size, max_time, cell.output_size]`.

  If time_major == True, this will be a `Tensor` shaped:
    `[max_time, batch_size, cell.output_size]`.

  Note, if `cell.output_size` is a (possibly nested) tuple of integers
  or `TensorShape` objects, then `outputs` will be a tuple having the
  same structure as `cell.output_size`, containing Tensors having shapes
  corresponding to the shape data in `cell.output_size`.

state: The final state.  If `cell.state_size` is an int, this
  will be shaped `[batch_size, cell.state_size]`.  If it is a
  `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
  If it is a (possibly nested) tuple of ints or `TensorShape`, this will
  be a tuple having the corresponding shapes.

Raises: TypeError: If cell is not an instance of RNNCell. ValueError: If inputs is None or an empty list.
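
A minimal batch-major sketch (assuming the tf.nn.rnn_cell module that shipped alongside this API; cell size and shapes are illustrative):

```python
import tensorflow as tf

inputs = tf.random_normal([2, 5, 3])    # [batch_size, max_time, depth], time_major=False
cell = tf.nn.rnn_cell.BasicLSTMCell(8)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32,
                                   sequence_length=[5, 3])

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # (2, 5, 8); output steps past each sequence_length are zeroed out
    print(sess.run(outputs).shape)
```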

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dynamic_stitch(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dynamic_stitch, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dynamic_stitch

Return

Applicative

Original documentation for Builder.dynamic_stitch

def dynamic_stitch(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.dynamic_stitch to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.dynamic_stitch

def dynamic_stitch(indices, data, name=None)

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

# Scalar indices
merged[indices[m], ...] = data[m][...]

# Vector indices
merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

merged.shape = [max(indices)] + constant

Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result.

For example:

indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]

Args: indices: A list of at least 1 Tensor objects of type int32. data: A list with the same number of Tensor objects as indices of Tensor objects of the same type. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data.
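
The interleaving example above, assembled into a runnable sketch:

```python
import tensorflow as tf

indices = [tf.constant(6),
           tf.constant([4, 1]),
           tf.constant([[5, 2], [0, 3]])]
data = [tf.constant([61, 62]),
        tf.constant([[41, 42], [11, 12]]),
        tf.constant([[[51, 52], [21, 22]], [[1, 2], [31, 32]]])]
merged = tf.dynamic_stitch(indices, data)

with tf.Session() as sess:
    # [[ 1  2] [11 12] [21 22] [31 32] [41 42] [51 52] [61 62]]
    print(sess.run(merged))
```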

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def dynamic_stitch_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.dynamic_stitch_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.dynamic_stitch_layer

Return

Applicative

Original documentation for Builder.dynamic_stitch_layer

def dynamic_stitch_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.dynamic_stitch, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.dynamic_stitch

def dynamic_stitch(indices, data, name=None):

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

# Scalar indices
merged[indices[m], ...] = data[m][...]

# Vector indices
merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

merged.shape = [max(indices)] + constant

Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result.

For example:

indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]

Args: indices: A list of at least 1 Tensor objects of type int32. data: A list with the same number of Tensor objects as indices of Tensor objects of the same type. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def edit_distance(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.edit_distance, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.edit_distance

Return

Applicative

Original documentation for Builder.edit_distance

def edit_distance(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.edit_distance to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.edit_distance

def edit_distance(hypothesis, truth, normalize=True, name="edit_distance")

Computes the Levenshtein distance between sequences.

This operation takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance. You can normalize the edit distance by length of truth by setting normalize to true.

For example, given the following input:

```python
# 'hypothesis' is a tensor of shape [2, 1] with variable-length values:
#   (0,0) = ["a"]
#   (1,0) = ["b"]
hypothesis = tf.SparseTensor(
    [[0, 0, 0], [1, 0, 0]],
    ["a", "b"],
    (2, 1, 1))

# 'truth' is a tensor of shape [2, 2] with variable-length values:
#   (0,0) = []
#   (0,1) = ["a"]
#   (1,0) = ["b", "c"]
#   (1,1) = ["a"]
truth = tf.SparseTensor(
    [[0, 1, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]],
    ["a", "b", "c", "a"],
    (2, 2, 2))

normalize = True
```

This operation would return the following:

```python
# 'output' is a tensor of shape [2, 2] with edit distances normalized
# by 'truth' lengths.
output ==> [[inf, 1.0],  # (0,0): no truth,  (0,1): no hypothesis
            [0.5, 1.0]]  # (1,0): addition,  (1,1): no hypothesis
```

Args: hypothesis: A SparseTensor containing hypothesis sequences. truth: A SparseTensor containing truth sequences. normalize: A bool. If True, normalizes the Levenshtein distance by length of truth. name: A name for the operation (optional).

Returns: A dense Tensor with rank R - 1, where R is the rank of the SparseTensor inputs hypothesis and truth.

Raises: TypeError: If either hypothesis or truth are not a SparseTensor.
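
The example above as a runnable sketch (session-style API assumed):

```python
import tensorflow as tf

hypothesis = tf.SparseTensor([[0, 0, 0], [1, 0, 0]], ["a", "b"], (2, 1, 1))
truth = tf.SparseTensor([[0, 1, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]],
                        ["a", "b", "c", "a"], (2, 2, 2))
distance = tf.edit_distance(hypothesis, truth, normalize=True)

with tf.Session() as sess:
    print(sess.run(distance))  # [[ inf  1. ] [ 0.5  1. ]]
```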

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def edit_distance_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.edit_distance_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.edit_distance_layer

Return

Applicative

Original documentation for Builder.edit_distance_layer

def edit_distance_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.edit_distance, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.edit_distance

def edit_distance(hypothesis, truth, normalize=True, name="edit_distance"):

Computes the Levenshtein distance between sequences.

This operation takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance. You can normalize the edit distance by length of truth by setting normalize to true.

For example, given the following input:

```python
# 'hypothesis' is a tensor of shape [2, 1] with variable-length values:
#   (0,0) = ["a"]
#   (1,0) = ["b"]
hypothesis = tf.SparseTensor(
    [[0, 0, 0], [1, 0, 0]],
    ["a", "b"],
    (2, 1, 1))

# 'truth' is a tensor of shape [2, 2] with variable-length values:
#   (0,0) = []
#   (0,1) = ["a"]
#   (1,0) = ["b", "c"]
#   (1,1) = ["a"]
truth = tf.SparseTensor(
    [[0, 1, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]],
    ["a", "b", "c", "a"],
    (2, 2, 2))

normalize = True
```

This operation would return the following:

```python
# 'output' is a tensor of shape [2, 2] with edit distances normalized
# by 'truth' lengths.
output ==> [[inf, 1.0],  # (0,0): no truth,  (0,1): no hypothesis
            [0.5, 1.0]]  # (1,0): addition,  (1,1): no hypothesis
```

Args: hypothesis: A SparseTensor containing hypothesis sequences. truth: A SparseTensor containing truth sequences. normalize: A bool. If True, normalizes the Levenshtein distance by length of truth. name: A name for the operation (optional).

Returns: A dense Tensor with rank R - 1, where R is the rank of the SparseTensor inputs hypothesis and truth.

Raises: TypeError: If either hypothesis or truth are not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def einsum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.einsum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.einsum

Return

Applicative

Original documentation for Builder.einsum

def einsum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.einsum to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.einsum

def einsum(axes)

A generalized contraction between tensors of arbitrary dimension.

Like numpy.einsum.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def einsum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.einsum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.einsum_layer

Return

Applicative

Original documentation for Builder.einsum_layer

def einsum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.einsum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.einsum

def einsum(axes):

A generalized contraction between tensors of arbitrary dimension.

Like numpy.einsum.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def elu(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.elu, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.elu

Return

Applicative

Original documentation for Builder.elu

def elu(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.elu to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.elu

def elu(features, name=None)

Computes exponential linear: exp(features) - 1 if < 0, features otherwise.

See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.
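
A minimal sketch showing the two branches of the activation:

```python
import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 1.0])
y = tf.nn.elu(x)  # exp(x) - 1 below zero, identity otherwise

with tf.Session() as sess:
    print(sess.run(y))  # approx [-0.8647 -0.3935  0.      1.    ]
```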

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def elu_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.elu_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.elu_layer

Return

Applicative

Original documentation for Builder.elu_layer

def elu_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.elu, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.elu

def elu(features, name=None):

Computes exponential linear: exp(features) - 1 if < 0, features otherwise.

See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def embedding(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.embedding, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.embedding

Return

Applicative

Original documentation for Builder.embedding

def embedding(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tflearn.layers.embedding_ops.embedding to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tflearn.layers.embedding_ops.embedding

def embedding(incoming, input_dim, output_dim, validate_indices=False, weights_init="truncated_normal", trainable=True, restore=True, reuse=False, scope=None, name="Embedding")

Embedding.

Embedding layer for a sequence of integer ids or floats.

Input: 2-D Tensor [samples, ids].

Output: 3-D Tensor [samples, embedded_ids, features].

Arguments: incoming: Incoming 2-D Tensor. input_dim: list of int. Vocabulary size (number of ids). output_dim: list of int. Embedding size. validate_indices: bool. Whether or not to validate gather indices. weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'truncated_normal'. trainable: bool. If True, weights will be trainable. restore: bool. If True, this layer weights will be restored when loading a model reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared). scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name. name: A name for this layer (optional). Default: 'Embedding'.
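
A minimal usage sketch against the tflearn layers API documented above (vocabulary size, sequence length, and embedding size are illustrative assumptions):

```python
import tensorflow as tf
import tflearn

# 2-D input of integer ids: [samples, ids]
net = tflearn.input_data(shape=[None, 100], dtype=tf.int32)
# 3-D output: [samples, 100, 128]
net = tflearn.embedding(net, input_dim=10000, output_dim=128)
```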

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def embedding_lookup(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.embedding_lookup, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.embedding_lookup

Return

Applicative

Original documentation for Builder.embedding_lookup

def embedding_lookup(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.embedding_lookup to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.embedding_lookup

def embedding_lookup(params, ids, partition_strategy="mod", name=None, validate_indices=True)

Looks up ids in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in params. It is a generalization of tf.gather(), where params is interpreted as a partition of a larger embedding tensor.

If len(params) > 1, each element id of ids is partitioned between the elements of params according to the partition_strategy. In all strategies, if the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id.

If partition_strategy is "mod", we assign each id to partition p = id % len(params). For instance, 13 ids are split across 5 partitions as: [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]

If partition_strategy is "div", we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape shape(ids) + shape(params)[1:].

Args: params: A list of tensors with the same type and which can be concatenated along dimension 0. Each Tensor must be appropriately sized for the given partition_strategy. ids: A Tensor with type int32 or int64 containing the ids to be looked up in params. partition_strategy: A string specifying the partitioning strategy, relevant if len(params) > 1. Currently "div" and "mod" are supported. Default is "mod". name: A name for the operation (optional). validate_indices: Whether or not to validate gather indices.

Returns: A Tensor with the same type as the tensors in params.

Raises: ValueError: If params is empty.
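
A single-partition sketch (len(params) == 1, so partition_strategy is irrelevant; the values are illustrative):

```python
import tensorflow as tf

params = tf.constant([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])  # 3 ids, 2 features each
ids = tf.constant([2, 0, 0])
embedded = tf.nn.embedding_lookup(params, ids)

with tf.Session() as sess:
    # shape = shape(ids) + shape(params)[1:] = (3, 2)
    print(sess.run(embedded))  # [[ 3.  3.] [ 1.  1.] [ 1.  1.]]
```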

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def embedding_lookup_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.embedding_lookup_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.embedding_lookup_layer

Return

Applicative

Original documentation for Builder.embedding_lookup_layer

def embedding_lookup_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.embedding_lookup, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.embedding_lookup

def embedding_lookup(params, ids, partition_strategy="mod", name=None, validate_indices=True):

Looks up ids in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in params. It is a generalization of tf.gather(), where params is interpreted as a partition of a larger embedding tensor.

If len(params) > 1, each element id of ids is partitioned between the elements of params according to the partition_strategy. In all strategies, if the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id.

If partition_strategy is "mod", we assign each id to partition p = id % len(params). For instance, 13 ids are split across 5 partitions as: [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]

If partition_strategy is "div", we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape shape(ids) + shape(params)[1:].

Args: params: A list of tensors with the same type and which can be concatenated along dimension 0. Each Tensor must be appropriately sized for the given partition_strategy. ids: A Tensor with type int32 or int64 containing the ids to be looked up in params. partition_strategy: A string specifying the partitioning strategy, relevant if len(params) > 1. Currently "div" and "mod" are supported. Default is "mod". name: A name for the operation (optional). validate_indices: Whether or not to validate gather indices.

Returns: A Tensor with the same type as the tensors in params.

Raises: ValueError: If params is empty.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def embedding_lookup_sparse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.embedding_lookup_sparse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.embedding_lookup_sparse

Return

Applicative

Original documentation for Builder.embedding_lookup_sparse

def embedding_lookup_sparse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.embedding_lookup_sparse to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.embedding_lookup_sparse

def embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy="mod", name=None, combiner=None)

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.

Args: params: A single tensor representing the complete embedding tensor, or a list of P tensors all of same shape except for the first dimension, representing sharded embedding tensors. sp_ids: N x M SparseTensor of int64 ids (typically from FeatureValueToId), where N is typically batch size and M is arbitrary. sp_weights: either a SparseTensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, sp_weights must have exactly the same shape and indices as sp_ids. partition_strategy: A string specifying the partitioning strategy, relevant if len(params) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name: Optional name for the op. combiner: A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.

Returns: A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if shape(combined params) = [p0, p1, ..., pm] and shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn] then shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0

with combiner="mean", then the output will be a 3x20 matrix where

    output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
    output[1, :] = params[0, :] * 1.0
    output[2, :] = params[1, :] * 3.0

Raises: TypeError: If sp_ids is not a SparseTensor, or if sp_weights is neither None nor SparseTensor. ValueError: If combiner is not one of {"mean", "sqrtn", "sum"}.
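
A small runnable sketch with unit weights (sp_weights=None); the 4x2 params matrix is an illustrative stand-in for the 10x20 example above:

```python
import tensorflow as tf

params = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
# row 0 holds ids 1 and 3, row 1 holds id 0 (canonical row-major order, no empty rows)
sp_ids = tf.SparseTensor([[0, 0], [0, 1], [1, 0]],
                         tf.constant([1, 3, 0], dtype=tf.int64),
                         [2, 2])
combined = tf.nn.embedding_lookup_sparse(params, sp_ids, None, combiner="mean")

with tf.Session() as sess:
    # row 0: (params[1] + params[3]) / 2 = [1.0, 1.5]; row 1: params[0] = [1.0, 0.0]
    print(sess.run(combined))
```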

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def embedding_lookup_sparse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.embedding_lookup_sparse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.embedding_lookup_sparse_layer

Return

Applicative

Original documentation for Builder.embedding_lookup_sparse_layer

def embedding_lookup_sparse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.embedding_lookup_sparse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.embedding_lookup_sparse

def embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy="mod", name=None, combiner=None):

Computes embeddings for the given ids and weights.

This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.

It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.

Args: params: A single tensor representing the complete embedding tensor, or a list of P tensors all of same shape except for the first dimension, representing sharded embedding tensors. sp_ids: N x M SparseTensor of int64 ids (typically from FeatureValueToId), where N is typically batch size and M is arbitrary. sp_weights: either a SparseTensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, sp_weights must have exactly the same shape and indices as sp_ids. partition_strategy: A string specifying the partitioning strategy, relevant if len(params) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name: Optional name for the op. combiner: A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.

Returns: A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if shape(combined params) = [p0, p1, ..., pm] and shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn] then shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0

with combiner="mean", then the output will be a 3x20 matrix where

    output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
    output[1, :] = params[0, :] * 1.0
    output[2, :] = params[1, :] * 3.0

Raises: TypeError: If sp_ids is not a SparseTensor, or if sp_weights is neither None nor SparseTensor. ValueError: If combiner is not one of {"mean", "sqrtn", "sum"}.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def encode_base64(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.encode_base64, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.encode_base64

Return

Applicative

Original documentation for Builder.encode_base64

def encode_base64(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.encode_base64 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.encode_base64

def encode_base64(input, pad=None, name=None)

Encode strings into web-safe base64 format.

Refer to the following article for more information on base64 format: en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the end so that the encoded has length multiple of 4. See Padding section of the link above.

Web-safe means that the encoder uses - and _ instead of + and /.

Args: input: A Tensor of type string. Strings to be encoded. pad: An optional bool. Defaults to False. Bool whether padding is applied at the ends. name: A name for the operation (optional).

Returns: A Tensor of type string. Input strings encoded in base64.
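
A minimal sketch; note the web-safe alphabet in the output:

```python
import tensorflow as tf

s = tf.constant(["hello", "tensorflow"])
encoded = tf.encode_base64(s, pad=True)  # pads with '=' to a multiple of 4

with tf.Session() as sess:
    print(sess.run(encoded))  # ['aGVsbG8=' 'dGVuc29yZmxvdw==']
```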

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def encode_base64_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.encode_base64_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.encode_base64_layer

Return

Applicative

Original documentation for Builder.encode_base64_layer

def encode_base64_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.encode_base64, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.encode_base64

def encode_base64(input, pad=None, name=None):

Encode strings into web-safe base64 format.

Refer to the following article for more information on base64 format: en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the end so that the encoded has length multiple of 4. See Padding section of the link above.

Web-safe means that the encoder uses - and _ instead of + and /.

Args: input: A Tensor of type string. Strings to be encoded. pad: An optional bool. Defaults to False. Bool whether padding is applied at the ends. name: A name for the operation (optional).

Returns: A Tensor of type string. Input strings encoded in base64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ensamble_dropout(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.ensamble_dropout, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.ensamble_dropout

Return

Applicative

Original documentation for BuilderTree.ensamble_dropout

def ensamble_dropout(tree, keep_prob, seed=None, name=None):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is the same as tensorbuilder.BuilderTree.ensamble_dropout.

Original Documentation for tensorbuilder.BuilderTree.ensamble_dropout

def ensamble_dropout(tree, keep_prob, seed=None, name=None)

None

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def equal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.equal

Return

Applicative

Original documentation for Builder.equal

def equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.equal to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.equal

def equal(x, y, name=None)

Returns the truth value of (x == y) element-wise.

NOTE: Equal supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, quint8, qint8, qint32, string, bool, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.
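
A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([1, 5, 3])
eq = tf.equal(x, y)  # element-wise comparison; broadcasting applies as noted above

with tf.Session() as sess:
    print(sess.run(eq))  # [ True False  True]
```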

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def equal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.equal_layer

Return

Applicative

Original documentation for Builder.equal_layer

def equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.equal

def equal(x, y, name=None):

Returns the truth value of (x == y) element-wise.

NOTE: Equal supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, quint8, qint8, qint32, string, bool, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def erf(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.erf, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.erf

Return

Applicative

Original documentation for Builder.erf

def erf(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.erf to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.erf

def erf(x, name=None)

Computes the Gauss error function of x element-wise.

Args: x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def erf_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.erf_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.erf_layer

Return

Applicative

Original documentation for Builder.erf_layer

def erf_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.erf, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.erf

def erf(x, name=None):

Computes the Gauss error function of x element-wise.

Args: x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def erfc(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.erfc, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.erfc

Return

Applicative

Original documentation for Builder.erfc

def erfc(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.erfc to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.erfc

def erfc(x, name=None)

Computes the complementary error function of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def erfc_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.erfc_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.erfc_layer

Return

Applicative

Original documentation for Builder.erfc_layer

def erfc_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.erfc, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.erfc

def erfc(x, name=None):

Computes the complementary error function of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def erosion2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.erosion2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.erosion2d

Return

Applicative

Original documentation for Builder.erosion2d

def erosion2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.erosion2d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.erosion2d

def erosion2d(value, kernel, strides, rates, padding, name=None)

Computes the grayscale erosion of 4-D value and 3-D kernel tensors.

The value tensor has shape [batch, in_height, in_width, depth] and the kernel tensor has shape [kernel_height, kernel_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format.

In detail, the grayscale morphological 2-D erosion is given by:

output[b, y, x, c] =
   min_{dy, dx} value[b,
                      strides[1] * y - rates[1] * dy,
                      strides[2] * x - rates[2] * dx,
                      c] -
                kernel[dy, dx, c]

Duality: The erosion of value by the kernel is equal to the negation of the dilation of -value by the reflected kernel.

Args: value: A Tensor. 4-D with shape [batch, in_height, in_width, depth]. kernel: A Tensor. Must have the same type as value. 3-D with shape [kernel_height, kernel_width, depth]. strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1]. rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1]. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional). If not specified "erosion2d" is used.

Returns: A Tensor. Has the same type as value. 4-D with shape [batch, out_height, out_width, depth].

Raises: ValueError: If the value depth does not match the kernel's shape, or if padding is other than 'VALID' or 'SAME'.
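
A minimal sketch; with an all-zero kernel the erosion formula above reduces to a sliding per-channel minimum (the shapes are illustrative):

```python
import tensorflow as tf

value = tf.random_normal([1, 8, 8, 3])   # [batch, in_height, in_width, depth]
kernel = tf.zeros([2, 2, 3])             # flat (all-zero) structuring element
eroded = tf.nn.erosion2d(value, kernel,
                         strides=[1, 1, 1, 1],
                         rates=[1, 1, 1, 1],
                         padding="SAME")

with tf.Session() as sess:
    print(sess.run(eroded).shape)  # (1, 8, 8, 3)
```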

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def erosion2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.erosion2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.erosion2d_layer

Return

Applicative

Original documentation for Builder.erosion2d_layer

def erosion2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.erosion2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.erosion2d

def erosion2d(value, kernel, strides, rates, padding, name=None):

Computes the grayscale erosion of 4-D value and 3-D kernel tensors.

The value tensor has shape [batch, in_height, in_width, depth] and the kernel tensor has shape [kernel_height, kernel_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format.

In detail, the grayscale morphological 2-D erosion is given by:

output[b, y, x, c] =
   min_{dy, dx} value[b,
                      strides[1] * y - rates[1] * dy,
                      strides[2] * x - rates[2] * dx,
                      c] -
                kernel[dy, dx, c]

Duality: The erosion of value by the kernel is equal to the negation of the dilation of -value by the reflected kernel.

Args: value: A Tensor. 4-D with shape [batch, in_height, in_width, depth]. kernel: A Tensor. Must have the same type as value. 3-D with shape [kernel_height, kernel_width, depth]. strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1]. rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1]. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional). If not specified "erosion2d" is used.

Returns: A Tensor. Has the same type as value. 4-D with shape [batch, out_height, out_width, depth].

Raises: ValueError: If the value depth does not match the kernel's shape, or if padding is other than 'VALID' or 'SAME'.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def exp(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.exp, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.exp

Return

Applicative

Original documentation for Builder.exp

def exp(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.exp to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.exp

def exp(x, name=None)

Computes exponential of x element-wise. \(y = e^x\).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def exp_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.exp_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.exp_layer

Return

Applicative

Original documentation for Builder.exp_layer

def exp_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.exp, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.exp

def exp(x, name=None):

Computes exponential of x element-wise. \(y = e^x\).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.
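
Given the alias above, exp_layer(size) should be interchangeable with fully_connected(size, activation_fn=tf.exp). A hedged sketch: the fully_connected builder method is assumed from the alias text, and the two calls create separate layer variables, so "equivalent" here means structurally, not numerically:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

# structurally the same layer, per the alias above
h1 = tb.build(x).exp_layer(20).tensor()
h2 = tb.build(x).fully_connected(20, activation_fn=tf.exp).tensor()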

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def expand_dims(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.expand_dims, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.expand_dims

Return

Applicative

Original documentation for Builder.expand_dims

def expand_dims(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.expand_dims, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.expand_dims

def expand_dims(input, dim, name=None)

Inserts a dimension of 1 into a tensor's shape.

Given a tensor input, this operation inserts a dimension of 1 at the dimension index dim of input's shape. The dimension index dim starts at zero; if you specify a negative number for dim it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape [height, width, channels], you can make it a batch of 1 image with expand_dims(image, 0), which will make the shape [1, height, width, channels].

Other examples:

```prettyprint
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]

# 't2' is a tensor of shape [2, 3, 5]
shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
```

This operation requires that:

-1-input.dims() <= dim <= input.dims()

This operation is related to squeeze(), which removes dimensions of size 1.

Args: input: A Tensor. dim: A Tensor. Must be one of the following types: int32, int64. 0-D (scalar). Specifies the dimension index at which to expand the shape of input. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Contains the same data as input, but its shape has an additional dimension of size 1 added.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def expand_dims_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.expand_dims_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.expand_dims_layer

Return

Applicative

Original documentation for Builder.expand_dims_layer

def expand_dims_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.expand_dims, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.expand_dims

def expand_dims(input, dim, name=None):

Inserts a dimension of 1 into a tensor's shape.

Given a tensor input, this operation inserts a dimension of 1 at the dimension index dim of input's shape. The dimension index dim starts at zero; if you specify a negative number for dim it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape [height, width, channels], you can make it a batch of 1 image with expand_dims(image, 0), which will make the shape [1, height, width, channels].

Other examples:

```prettyprint
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]

# 't2' is a tensor of shape [2, 3, 5]
shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
```

This operation requires that:

-1-input.dims() <= dim <= input.dims()

This operation is related to squeeze(), which removes dimensions of size 1.

Args: input: A Tensor. dim: A Tensor. Must be one of the following types: int32, int64. 0-D (scalar). Specifies the dimension index at which to expand the shape of input. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Contains the same data as input, but its shape has an additional dimension of size 1 added.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def extract(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.extract, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.extract

Return

Applicative

Original documentation for BuilderTree.extract

def extract(tree, fn):

@immutable

Expects a function fn of type list( Tensor ) -> Tensor and applies it to tensorbuilder.core.builders.BuilderTree.tensors; the resulting Tensor is wrapped in a Builder.

Parameters

  • fn: a function of type list( Tensor ) -> Tensor.
  • All additional *args and **kwargs are forwarded to fn

Return

  • tensorbuilder.core.builders.Builder

Example

Let's redo the example from tensorbuilder.core.builders.BuilderTree.map_each using extract

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = (
    tb.build(x)
    .branch(lambda x: [
        x.relu_layer(20)
    ,
        x.sigmoid_layer(20)
    ,
        x.tanh_layer(20)
    ])
    .map_each(tf.contrib.layers.fully_connected, 5, activation_fn=None)
    .extract(lambda tensors: tf.add_n(tensors)) #or just .extract(tf.add_n)
    .softmax()
    .tensor()
)

Same example using the DSL

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = (
    x,
    [
        tb.relu_layer(20)
    ,
        tb.sigmoid_layer(20)
    ,
        tb.tanh_layer(20)
    ],
    tb.map_each(tf.contrib.layers.fully_connected, 5, activation_fn=None)
    .extract(lambda tensors: tf.add_n(tensors)) #or just .extract(tf.add_n)
    .softmax()
    .tensor()
)

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def extract_image_patches(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.extract_image_patches, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.extract_image_patches

Return

Applicative

Original documentation for Builder.extract_image_patches

def extract_image_patches(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.extract_image_patches, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.extract_image_patches

def extract_image_patches(images, ksizes, strides, rates, padding, name=None)

Extract patches from images and put them in the "depth" output dimension.

Args: images: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D Tensor with shape [batch, in_rows, in_cols, depth]. ksizes: A list of ints that has length >= 4. The size of the sliding window for each dimension of images. strides: A list of ints that has length >= 4. 1-D of length 4. How far the centers of two consecutive patches are in the images. Must be: [1, stride_rows, stride_cols, 1]. rates: A list of ints that has length >= 4. 1-D of length 4. Must be: [1, rate_rows, rate_cols, 1]. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1), followed by subsampling them spatially by a factor of rates. padding: A string from: "SAME", "VALID". The type of padding algorithm to use.

We specify the size-related attributes as:

      ksizes = [1, ksize_rows, ksize_cols, 1]
      strides = [1, strides_rows, strides_cols, 1]
      rates = [1, rates_rows, rates_cols, 1]

name: A name for the operation (optional).

Returns: A Tensor. Has the same type as images. 4-D Tensor with shape [batch, out_rows, out_cols, ksize_rows * ksize_cols * depth] containing image patches with size ksize_rows x ksize_cols x depth vectorized in the "depth" dimension.
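
A minimal sketch of the underlying op with illustrative shapes (3x3 patches at stride 1, no atrous rate):

import tensorflow as tf

images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
patches = tf.extract_image_patches(images,
                                   ksizes=[1, 3, 3, 1],
                                   strides=[1, 1, 1, 1],
                                   rates=[1, 1, 1, 1],
                                   padding="SAME")
# with "SAME" padding and stride 1, patches has shape [batch, 32, 32, 3 * 3 * 3]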

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def extract_image_patches_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.extract_image_patches_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.extract_image_patches_layer

Return

Applicative

Original documentation for Builder.extract_image_patches_layer

def extract_image_patches_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.extract_image_patches, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.extract_image_patches

def extract_image_patches(images, ksizes, strides, rates, padding, name=None):

Extract patches from images and put them in the "depth" output dimension.

Args: images: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D Tensor with shape [batch, in_rows, in_cols, depth]. ksizes: A list of ints that has length >= 4. The size of the sliding window for each dimension of images. strides: A list of ints that has length >= 4. 1-D of length 4. How far the centers of two consecutive patches are in the images. Must be: [1, stride_rows, stride_cols, 1]. rates: A list of ints that has length >= 4. 1-D of length 4. Must be: [1, rate_rows, rate_cols, 1]. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1), followed by subsampling them spatially by a factor of rates. padding: A string from: "SAME", "VALID". The type of padding algorithm to use.

We specify the size-related attributes as:

      ksizes = [1, ksize_rows, ksize_cols, 1]
      strides = [1, strides_rows, strides_cols, 1]
      rates = [1, rates_rows, rates_cols, 1]

name: A name for the operation (optional).

Returns: A Tensor. Has the same type as images. 4-D Tensor with shape [batch, out_rows, out_cols, ksize_rows * ksize_cols * depth] containing image patches with size ksize_rows x ksize_cols x depth vectorized in the "depth" dimension.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fft(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fft, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fft

Return

Applicative

Original documentation for Builder.fft

def fft(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.fft, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.fft

def fft(input, name=None)

Compute the 1-dimensional discrete Fourier Transform over the inner-most dimension of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its 1D Fourier Transform.
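
Since the op requires a complex64 input, a real-valued tensor has to be converted first; a minimal sketch (the shape is an illustrative assumption):

import tensorflow as tf

signal = tf.placeholder(tf.float32, shape=[None, 64])
complex_signal = tf.complex(signal, tf.zeros_like(signal))  # complex64 with zero imaginary part
spectrum = tf.fft(complex_signal)  # same shape; inner-most dimension replaced by its 1D DFT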

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fft2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fft2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fft2d

Return

Applicative

Original documentation for Builder.fft2d

def fft2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.fft2d, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.fft2d

def fft2d(input, name=None)

Compute the 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their 2D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fft2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fft2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fft2d_layer

Return

Applicative

Original documentation for Builder.fft2d_layer

def fft2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.fft2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.fft2d

def fft2d(input, name=None):

Compute the 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their 2D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fft3d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fft3d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fft3d

Return

Applicative

Original documentation for Builder.fft3d

def fft3d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.fft3d, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.fft3d

def fft3d(input, name=None)

Compute the 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their 3D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fft3d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fft3d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fft3d_layer

Return

Applicative

Original documentation for Builder.fft3d_layer

def fft3d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.fft3d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.fft3d

def fft3d(input, name=None):

Compute the 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their 3D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fft_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fft_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fft_layer

Return

Applicative

Original documentation for Builder.fft_layer

def fft_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.fft, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.fft

def fft(input, name=None):

Compute the 1-dimensional discrete Fourier Transform over the inner-most dimension of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its 1D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fill(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fill, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fill

Return

Applicative

Original documentation for Builder.fill

def fill(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.fill, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.fill

def fill(dims, value, name=None)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape dims and fills it with value.

For example:

```prettyprint
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]
```

Args: dims: A Tensor of type int32. 1-D. Represents the shape of the output tensor. value: A Tensor. 0-D (scalar). Value to fill the returned tensor. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fill_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fill_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fill_layer

Return

Applicative

Original documentation for Builder.fill_layer

def fill_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.fill, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.fill

def fill(dims, value, name=None):

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape dims and fills it with value.

For example:

```prettyprint
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]
```

Args: dims: A Tensor of type int32. 1-D. Represents the shape of the output tensor. value: A Tensor. 0-D (scalar). Value to fill the returned tensor. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fixed_size_partitioner(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fixed_size_partitioner, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fixed_size_partitioner

Return

Applicative

Original documentation for Builder.fixed_size_partitioner

def fixed_size_partitioner(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.fixed_size_partitioner, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.fixed_size_partitioner

def fixed_size_partitioner(num_shards, axis=0)

Partitioner to specify a fixed number of shards along given axis.

Args: num_shards: int, number of shards to partition variable. axis: int, axis to partition on.

Returns: A partition function usable as the partitioner argument to variable_scope, get_variable, and get_partitioned_variable_list.
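
Note that per the Returns note above, this function produces a partitioner for variable creation rather than operating on tensors, so the plain TF usage is the more natural one. A sketch (shapes and scope name are illustrative assumptions):

import tensorflow as tf

# shard the variable's first axis into 4 pieces
partitioner = tf.fixed_size_partitioner(num_shards=4, axis=0)
with tf.variable_scope("embeddings", partitioner=partitioner):
    w = tf.get_variable("w", shape=[1000, 64])  # stored internally as 4 shards of shape [250, 64]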

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fixed_size_partitioner_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fixed_size_partitioner_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fixed_size_partitioner_layer

Return

Applicative

Original documentation for Builder.fixed_size_partitioner_layer

def fixed_size_partitioner_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.fixed_size_partitioner, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.fixed_size_partitioner

def fixed_size_partitioner(num_shards, axis=0):

Partitioner to specify a fixed number of shards along given axis.

Args: num_shards: int, number of shards to partition variable. axis: int, axis to partition on.

Returns: A partition function usable as the partitioner argument to variable_scope, get_variable, and get_partitioned_variable_list.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fixed_unigram_candidate_sampler(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fixed_unigram_candidate_sampler, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fixed_unigram_candidate_sampler

Return

Applicative

Original documentation for Builder.fixed_unigram_candidate_sampler

def fixed_unigram_candidate_sampler(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.fixed_unigram_candidate_sampler, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.fixed_unigram_candidate_sampler

def fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file="", distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None)

Samples a set of classes using the provided (fixed) base distribution.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution is read from a file or passed in as an in-memory array. There is also an option to skew the distribution by applying a distortion power to the weights.

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. vocab_file: Each valid line in this file (which should have a CSV-like format) corresponds to a valid word ID. IDs are in sequential order, starting from num_reserved_ids. The last entry in each line is expected to be a value corresponding to the count or relative probability. Exactly one of vocab_file and unigrams needs to be passed to this operation. distortion: The distortion is used to skew the unigram probability distribution. Each weight is first raised to the distortion's power before adding to the internal unigram distribution. As a result, distortion = 1.0 gives regular unigram sampling (as defined by the vocab file), and distortion = 0.0 gives a uniform distribution. num_reserved_ids: Optionally some reserved IDs can be added in the range [0, num_reserved_ids] by the users. One use case is that a special unknown word token is used as ID 0. These IDs will have a sampling probability of 0. num_shards: A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with shard) indicates the number of partitions that are being used in the overall computation. shard: A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with num_shards) indicates the particular partition number of the operation, when partitioning is being used. unigrams: A list of unigram counts or probabilities, one per ID in sequential order. Exactly one of vocab_file and unigrams should be passed to this operation. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.
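
A minimal sketch with an in-memory unigram distribution; the counts and sizes are illustrative assumptions:

import tensorflow as tf

true_classes = tf.placeholder(tf.int64, shape=[None, 1])

# sample 5 candidate classes out of 10, weighted by explicit per-class counts
sampled, true_expected, sampled_expected = tf.nn.fixed_unigram_candidate_sampler(
    true_classes=true_classes,
    num_true=1,
    num_sampled=5,
    unique=True,
    range_max=10,
    unigrams=[10, 8, 6, 5, 5, 4, 3, 3, 2, 1])  # one count per class ID, in order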

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fixed_unigram_candidate_sampler_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fixed_unigram_candidate_sampler_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fixed_unigram_candidate_sampler_layer

Return

Applicative

Original documentation for Builder.fixed_unigram_candidate_sampler_layer

def fixed_unigram_candidate_sampler_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.fixed_unigram_candidate_sampler, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.fixed_unigram_candidate_sampler

def fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file="", distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None):

Samples a set of classes using the provided (fixed) base distribution.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution is read from a file or passed in as an in-memory array. There is also an option to skew the distribution by applying a distortion power to the weights.

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. vocab_file: Each valid line in this file (which should have a CSV-like format) corresponds to a valid word ID. IDs are in sequential order, starting from num_reserved_ids. The last entry in each line is expected to be a value corresponding to the count or relative probability. Exactly one of vocab_file and unigrams needs to be passed to this operation. distortion: The distortion is used to skew the unigram probability distribution. Each weight is first raised to the distortion's power before adding to the internal unigram distribution. As a result, distortion = 1.0 gives regular unigram sampling (as defined by the vocab file), and distortion = 0.0 gives a uniform distribution. num_reserved_ids: Optionally some reserved IDs can be added in the range [0, num_reserved_ids] by the users. One use case is that a special unknown word token is used as ID 0. These IDs will have a sampling probability of 0. num_shards: A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with shard) indicates the number of partitions that are being used in the overall computation. shard: A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with num_shards) indicates the particular partition number of the operation, when partitioning is being used. unigrams: A list of unigram counts or probabilities, one per ID in sequential order. Exactly one of vocab_file and unigrams should be passed to this operation. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def flatten(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.flatten, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.flatten

Return

Applicative

Original documentation for Builder.flatten

def flatten(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.contrib.layers.flatten, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.contrib.layers.flatten

def flatten(inputs, outputs_collections=None, scope=None)

Flattens the input while maintaining the batch_size.

Assumes that the first dimension represents the batch.

Args: inputs: a tensor of size [batch_size, ...]. outputs_collections: collection to add the outputs. scope: Optional scope for name_scope.

Returns: a flattened tensor with shape [batch_size, k].

Raises: ValueError: if inputs.shape is wrong.
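
A minimal sketch of the underlying op, e.g. collapsing a convolutional feature map before a fully connected layer; the shapes are illustrative assumptions:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 8, 8, 32])
flat = tf.contrib.layers.flatten(x)  # shape [batch_size, 8 * 8 * 32] = [batch_size, 2048]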

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def floor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.floor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.floor

Return

Applicative

Original documentation for Builder.floor

def floor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.floor, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.floor

def floor(x, name=None)

Returns element-wise largest integer not greater than x.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def floor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.floor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.floor_layer

Return

Applicative

Original documentation for Builder.floor_layer

def floor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.floor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.floor

def floor(x, name=None):

Returns element-wise largest integer not greater than x.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def floordiv(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.floordiv, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.floordiv

Return

Applicative

Original documentation for Builder.floordiv

def floordiv(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.floordiv, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.floordiv

def floordiv(x, y, name=None)

Divides x / y elementwise, rounding down for floating point.

The same as tf.div(x,y) for integers, but uses tf.floor(tf.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

Note that for efficiency, floordiv uses C semantics for negative numbers (unlike Python and Numpy).

x and y must have the same type, and the result will have the same type as well.

Args: x: Tensor numerator of real numeric type. y: Tensor denominator of real numeric type. name: A name for the operation (optional).

Returns: x / y rounded down (except possibly towards zero for negative integers).

Raises: TypeError: If the inputs are complex.
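
The note about C semantics for negative numbers is easy to miss; a small sketch of the difference, per the documentation above:

import tensorflow as tf

c = tf.floordiv(tf.constant(-7), tf.constant(2))
# Python's -7 // 2 == -4 (rounds toward -infinity), but with C semantics for
# negative integers this op truncates toward zero, so c evaluates to -3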

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def floordiv_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.floordiv_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.floordiv_layer

Return

Applicative

Original documentation for Builder.floordiv_layer

def floordiv_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.floordiv, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.floordiv

def floordiv(x, y, name=None):

Divides x / y elementwise, rounding down for floating point.

The same as tf.div(x,y) for integers, but uses tf.floor(tf.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

Note that for efficiency, floordiv uses C semantics for negative numbers (unlike Python and Numpy).

x and y must have the same type, and the result will have the same type as well.

Args: x: Tensor numerator of real numeric type. y: Tensor denominator of real numeric type. name: A name for the operation (optional).

Returns: x / y rounded down (except possibly towards zero for negative integers).

Raises: TypeError: If the inputs are complex.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def foldl(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.foldl, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.foldl

Return

Applicative

Original documentation for Builder.foldl

def foldl(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.foldl, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.foldl

def foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)

foldl on the list of tensors unpacked from elems on dimension 0.

This foldl operator repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape.

Args: fn: The callable to be performed. elems: A tensor to be unpacked on dimension 0. initializer: (optional) The initial value for the accumulator. parallel_iterations: (optional) The number of iterations allowed to run in parallel. back_prop: (optional) True enables support for back propagation. swap_memory: (optional) True enables GPU-CPU memory swapping. name: (optional) Name prefix for the returned tensors.

Returns: A tensor resulting from applying fn consecutively to the list of tensors unpacked from elems, from first to last.

Raises: TypeError: if fn is not callable.

Example:

    elems = [1, 2, 3, 4, 5, 6]
    sum = foldl(lambda a, x: a + x, elems)  # sum == 21

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def foldl_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.foldl_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.foldl_layer

Return

Applicative

Original documentation for Builder.foldl_layer

def foldl_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.foldl, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.foldl

def foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None):

foldl on the list of tensors unpacked from elems on dimension 0.

This foldl operator repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape.

Args: fn: The callable to be performed. elems: A tensor to be unpacked on dimension 0. initializer: (optional) The initial value for the accumulator. parallel_iterations: (optional) The number of iterations allowed to run in parallel. back_prop: (optional) True enables support for back propagation. swap_memory: (optional) True enables GPU-CPU memory swapping. name: (optional) Name prefix for the returned tensors.

Returns: A tensor resulting from applying fn consecutively to the list of tensors unpacked from elems, from first to last.

Raises: TypeError: if fn is not callable.

Example:

    elems = [1, 2, 3, 4, 5, 6]
    sum = foldl(lambda a, x: a + x, elems)  # sum == 21

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def foldr(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.foldr, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.foldr

Return

Applicative

Original documentation for Builder.foldr

def foldr(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.foldr, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.foldr

def foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)

foldr on the list of tensors unpacked from elems on dimension 0.

This foldr operator repeatedly applies the callable fn to a sequence of elements from last to first. The elements are made of the tensors unpacked from elems. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape.

Args: fn: The callable to be performed. elems: A tensor that is unpacked into a sequence of tensors to apply fn. initializer: (optional) The initial value for the accumulator. parallel_iterations: (optional) The number of iterations allowed to run in parallel. back_prop: (optional) True enables support for back propagation. swap_memory: (optional) True enables GPU-CPU memory swapping. name: (optional) Name prefix for the returned tensors.

Returns: A tensor resulting from applying fn consecutively to the list of tensors unpacked from elems, from last to first.

Raises: TypeError: if fn is not callable.

Example:

    elems = [1, 2, 3, 4, 5, 6]
    sum = foldr(lambda a, x: a + x, elems)  # sum == 21

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def foldr_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.foldr_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.foldr_layer

Return

Applicative

Original documentation for Builder.foldr_layer

def foldr_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.foldr, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.foldr

def foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None):

foldr on the list of tensors unpacked from elems on dimension 0.

This foldr operator repeatedly applies the callable fn to a sequence of elements from last to first. The elements are made of the tensors unpacked from elems. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape.

Args: fn: The callable to be performed. elems: A tensor that is unpacked into a sequence of tensors to apply fn. initializer: (optional) The initial value for the accumulator. parallel_iterations: (optional) The number of iterations allowed to run in parallel. back_prop: (optional) True enables support for back propagation. swap_memory: (optional) True enables GPU-CPU memory swapping. name: (optional) Name prefix for the returned tensors.

Returns: A tensor resulting from applying fn consecutively to the list of tensors unpacked from elems, from last to first.

Raises: TypeError: if fn is not callable.

Example:

    elems = [1, 2, 3, 4, 5, 6]
    sum = foldr(lambda a, x: a + x, elems)  # sum == 21

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fractional_avg_pool(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fractional_avg_pool, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fractional_avg_pool

Return

Applicative

Original documentation for Builder.fractional_avg_pool

def fractional_avg_pool(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.fractional_avg_pool, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.fractional_avg_pool

def fractional_avg_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None)

Performs fractional average pooling on the input.

Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.

Args: value: A Tensor. Must be one of the following types: float32, float64, int32, int64. 4-D with shape [batch, height, width, channels]. pooling_ratio: A list of floats that has length >= 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively. pseudo_random: An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin Graham, Fractional Max-Pooling] (http://arxiv.org/abs/1412.6071) for difference between pseudorandom and random. overlapping: An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:

`index  0  1  2  3  4`

`value  20 5  16 3  7`

If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice: the two overlapping regions cover indices 0..2 and 2..4, so the result would be [(20 + 5 + 16)/3, (16 + 3 + 7)/3] = [41/3, 26/3] for fractional avg pooling.

deterministic: An optional bool. Defaults to False. When set to True, a fixed pooling region will be used when iterating over a FractionalAvgPool node in the computation graph. Mainly used in unit tests to make FractionalAvgPool deterministic. seed: An optional int. Defaults to 0. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. seed2: An optional int. Defaults to 0. A second seed to avoid seed collision. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: A Tensor. Has the same type as value. output tensor after fractional avg pooling. row_pooling_sequence: A Tensor of type int64. row pooling sequence, needed to calculate gradient. col_pooling_sequence: A Tensor of type int64. column pooling sequence, needed to calculate gradient.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fractional_avg_pool_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fractional_avg_pool_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fractional_avg_pool_layer

Return

Applicative

Original documentation for Builder.fractional_avg_pool_layer

def fractional_avg_pool_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.fractional_avg_pool, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.fractional_avg_pool

def fractional_avg_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None):

Performs fractional average pooling on the input.

Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.

Args: value: A Tensor. Must be one of the following types: float32, float64, int32, int64. 4-D with shape [batch, height, width, channels]. pooling_ratio: A list of floats that has length >= 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively. pseudo_random: An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin Graham, Fractional Max-Pooling] (http://arxiv.org/abs/1412.6071) for difference between pseudorandom and random. overlapping: An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:

`index  0  1  2  3  4`

`value  20 5  16 3  7`

If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice: the two overlapping regions cover indices 0..2 and 2..4, so the result would be [(20 + 5 + 16)/3, (16 + 3 + 7)/3] = [41/3, 26/3] for fractional avg pooling.

deterministic: An optional bool. Defaults to False. When set to True, a fixed pooling region will be used when iterating over a FractionalAvgPool node in the computation graph. Mainly used in unit tests to make FractionalAvgPool deterministic. seed: An optional int. Defaults to 0. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. seed2: An optional int. Defaults to 0. A second seed to avoid seed collision. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: A Tensor. Has the same type as value. output tensor after fractional avg pooling. row_pooling_sequence: A Tensor of type int64. row pooling sequence, needed to calculate gradient. col_pooling_sequence: A Tensor of type int64. column pooling sequence, needed to calculate gradient.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fractional_max_pool(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fractional_max_pool, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fractional_max_pool

Return

Applicative

Original documentation for Builder.fractional_max_pool

def fractional_max_pool(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.fractional_max_pool, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.fractional_max_pool

def fractional_max_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None)

Performs fractional max pooling on the input.

Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.

The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.

First we define the following:

  1. input_row_length : the number of rows from the input set
  2. output_row_length : which will be smaller than the input
  3. alpha = input_row_length / output_row_length : our reduction ratio
  4. K = floor(alpha)
  5. row_pooling_sequence : this is the result list of pool boundary rows

Then, row_pooling_sequence should satisfy:

  1. a[0] = 0 : the first value of the sequence is 0
  2. a[end] = input_row_length : the last value of the sequence is the size
  3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
  4. length(row_pooling_sequence) = output_row_length+1

For more details on fractional max pooling, see this paper: [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
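
A minimal sketch of the constraints above, assuming the signature documented below; the input shape and ratios are illustrative only:

```python
import tensorflow as tf

# [batch, height, width, channels]; values are illustrative
x = tf.random_uniform([1, 9, 9, 1])

# First and last ratios are 1.0: no pooling on batch and channels dimensions.
output, rows, cols = tf.nn.fractional_max_pool(
    x,
    pooling_ratio=[1.0, 1.44, 1.73, 1.0],
    pseudo_random=True,
    overlapping=True)

with tf.Session() as sess:
    print(sess.run(output).shape)  # roughly (1, 6, 5, 1) for these ratios
```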

Args: value: A Tensor. Must be one of the following types: float32, float64, int32, int64. 4-D with shape [batch, height, width, channels]. pooling_ratio: A list of floats that has length >= 4. Pooling ratio for each dimension of value; currently only the row and col dimensions are supported, and each ratio should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions. 1.44 and 1.73 are the pooling ratios for the height and width dimensions, respectively. pseudo_random: An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion; otherwise, in a random fashion. See [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for the difference between pseudorandom and random. overlapping: An optional bool. Defaults to False. When set to True, the values at the boundary of adjacent pooling cells are used by both cells. For example:

`index  0  1  2  3  4`

`value  20 5  16 3  7`

If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice.
The result would be [20, 16] for fractional max pooling.

deterministic: An optional bool. Defaults to False. When set to True, a fixed pooling region will be used when iterating over a FractionalMaxPool node in the computation graph. Mainly used in unit tests to make FractionalMaxPool deterministic. seed: An optional int. Defaults to 0. If either seed or seed2 is set to a non-zero value, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. seed2: An optional int. Defaults to 0. A second seed to avoid seed collision. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: A Tensor. Has the same type as value. output tensor after fractional max pooling. row_pooling_sequence: A Tensor of type int64. row pooling sequence, needed to calculate gradient. col_pooling_sequence: A Tensor of type int64. column pooling sequence, needed to calculate gradient.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fractional_max_pool_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fractional_max_pool_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fractional_max_pool_layer

Return

Applicative

Original documentation for Builder.fractional_max_pool_layer

def fractional_max_pool_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.fractional_max_pool, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.fractional_max_pool

def fractional_max_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None):

Performs fractional max pooling on the input.

Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.

The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.

First we define the following:

  1. input_row_length : the number of rows from the input set
  2. output_row_length : which will be smaller than the input
  3. alpha = input_row_length / output_row_length : our reduction ratio
  4. K = floor(alpha)
  5. row_pooling_sequence : this is the result list of pool boundary rows

Then, row_pooling_sequence should satisfy:

  1. a[0] = 0 : the first value of the sequence is 0
  2. a[end] = input_row_length : the last value of the sequence is the size
  3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
  4. length(row_pooling_sequence) = output_row_length+1

For more details on fractional max pooling, see this paper: [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)

Args: value: A Tensor. Must be one of the following types: float32, float64, int32, int64. 4-D with shape [batch, height, width, channels]. pooling_ratio: A list of floats that has length >= 4. Pooling ratio for each dimension of value; currently only the row and col dimensions are supported, and each ratio should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions. 1.44 and 1.73 are the pooling ratios for the height and width dimensions, respectively. pseudo_random: An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion; otherwise, in a random fashion. See [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for the difference between pseudorandom and random. overlapping: An optional bool. Defaults to False. When set to True, the values at the boundary of adjacent pooling cells are used by both cells. For example:

`index  0  1  2  3  4`

`value  20 5  16 3  7`

If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice.
The result would be [20, 16] for fractional max pooling.

deterministic: An optional bool. Defaults to False. When set to True, a fixed pooling region will be used when iterating over a FractionalMaxPool node in the computation graph. Mainly used in unit tests to make FractionalMaxPool deterministic. seed: An optional int. Defaults to 0. If either seed or seed2 is set to a non-zero value, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. seed2: An optional int. Defaults to 0. A second seed to avoid seed collision. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: A Tensor. Has the same type as value. output tensor after fractional max pooling. row_pooling_sequence: A Tensor of type int64. row pooling sequence, needed to calculate gradient. col_pooling_sequence: A Tensor of type int64. column pooling sequence, needed to calculate gradient.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fully_connected(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fully_connected, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fully_connected

Return

Applicative

Original documentation for Builder.fully_connected

def fully_connected(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.contrib.layers.fully_connected to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.contrib.layers.fully_connected

def fully_connected()

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args: inputs: A tensor of at least rank 2 with a static value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels]. num_outputs: Integer or long, the number of output units in the layer. activation_fn: activation function; set to None to skip it and maintain a linear activation. normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added; defaults to None for no normalizer function. normalizer_params: normalization function parameters. weights_initializer: An initializer for the weights. weights_regularizer: Optional regularizer for the weights. biases_initializer: An initializer for the biases. If None, skip biases. biases_regularizer: Optional regularizer for the biases. reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given. variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable. outputs_collections: collection to add the outputs to. trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). scope: Optional scope for variable_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
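
A hedged usage sketch, assuming a TensorFlow build that still ships tf.contrib.layers; the layer sizes are illustrative:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])

# weights (and biases) are created internally; ReLU is applied last.
h = tf.contrib.layers.fully_connected(x, 64, activation_fn=tf.nn.relu)

# activation_fn=None keeps the layer linear, as described above.
logits = tf.contrib.layers.fully_connected(h, 10, activation_fn=None)
```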

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fully_connected_from_product(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fully_connected_from_product, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fully_connected_from_product

Return

Applicative

Original documentation for Builder.fully_connected_from_product

def fully_connected_from_product(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tensorbuilder.fully_connected_from_product to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tensorbuilder.fully_connected_from_product

def fully_connected_from_product()

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args: inputs: A tensor of at least rank 2 with a static value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels]. num_outputs: Integer, the number of output units in the layer. activation_fn: activation function. normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. normalizer_params: normalization function parameters. weights_initializer: An initializer for the weights. weights_regularizer: Optional regularizer for the weights. biases_initializer: An initializer for the biases. If None, skip biases. biases_regularizer: Optional regularizer for the biases. reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given. variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable. outputs_collections: collection to add the outputs to. trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). scope: Optional scope for variable_op_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fused_resize_and_pad_conv2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fused_resize_and_pad_conv2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fused_resize_and_pad_conv2d

Return

Applicative

Original documentation for Builder.fused_resize_and_pad_conv2d

def fused_resize_and_pad_conv2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.fused_resize_and_pad_conv2d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.fused_resize_and_pad_conv2d

def fused_resize_and_pad_conv2d(input, size, paddings, filter, mode, strides, padding, resize_align_corners=None, name=None)

Performs a resize and padding as a preprocess during a convolution.

It's often possible to do spatial transformations more efficiently as part of the packing stage of a convolution, so this op allows for an optimized implementation where these stages are fused together. This prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and defaults to 'NHWC' order. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.

Args: input: A Tensor. Must be one of the following types: half, float32, float64. 4-D with shape [batch, in_height, in_width, in_channels]. size: A Tensor of type int32. A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images. paddings: A Tensor of type int32. A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of input. filter: A Tensor. Must have the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. mode: A string from: "REFLECT", "SYMMETRIC". strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. resize_align_corners: An optional bool. Defaults to False. If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.
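
A minimal sketch under the documented signature; the shapes, paddings, and mode below are illustrative assumptions, not values from the source:

```python
import tensorflow as tf

img = tf.random_uniform([1, 16, 16, 3])    # [batch, h, w, channels]
kernel = tf.random_uniform([3, 3, 3, 8])   # [fh, fw, in_ch, out_ch]

out = tf.nn.fused_resize_and_pad_conv2d(
    input=img,
    size=[32, 32],                           # new_height, new_width
    paddings=[[0, 0], [1, 1], [1, 1], [0, 0]],  # one row per input dimension
    filter=kernel,
    mode="REFLECT",
    strides=[1, 1, 1, 1],
    padding="SAME")
```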

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def fused_resize_and_pad_conv2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.fused_resize_and_pad_conv2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.fused_resize_and_pad_conv2d_layer

Return

Applicative

Original documentation for Builder.fused_resize_and_pad_conv2d_layer

def fused_resize_and_pad_conv2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.fused_resize_and_pad_conv2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.fused_resize_and_pad_conv2d

def fused_resize_and_pad_conv2d(input, size, paddings, filter, mode, strides, padding, resize_align_corners=None, name=None):

Performs a resize and padding as a preprocess during a convolution.

It's often possible to do spatial transformations more efficiently as part of the packing stage of a convolution, so this op allows for an optimized implementation where these stages are fused together. This prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and defaults to 'NHWC' order. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.

Args: input: A Tensor. Must be one of the following types: half, float32, float64. 4-D with shape [batch, in_height, in_width, in_channels]. size: A Tensor of type int32. A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images. paddings: A Tensor of type int32. A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of input. filter: A Tensor. Must have the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. mode: A string from: "REFLECT", "SYMMETRIC". strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. resize_align_corners: An optional bool. Defaults to False. If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def gather(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.gather, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.gather

Return

Applicative

Original documentation for Builder.gather

def gather(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.gather to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.gather

def gather(params, indices, validate_indices=None, name=None)

Gather slices from params according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:

# Scalar indices
output[:, ..., :] = params[indices, :, ... :]

# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]

# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]

If indices is a permutation and len(indices) == params.shape[0] then this operation will permute params accordingly.

Args: params: A Tensor. indices: A Tensor. Must be one of the following types: int32, int64. validate_indices: An optional bool. Defaults to True. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as params.
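
A quick sketch of the shape rule above, where the output shape is indices.shape + params.shape[1:]:

```python
import tensorflow as tf

params = tf.constant([[1, 2], [3, 4], [5, 6]])
indices = tf.constant([2, 0])
out = tf.gather(params, indices)  # shape (2, 2)

with tf.Session() as sess:
    print(sess.run(out))  # [[5 6], [1 2]]
```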

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def gather_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.gather_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.gather_layer

Return

Applicative

Original documentation for Builder.gather_layer

def gather_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.gather, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.gather

def gather(params, indices, validate_indices=None, name=None):

Gather slices from params according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:

# Scalar indices
output[:, ..., :] = params[indices, :, ... :]

# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]

# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]

If indices is a permutation and len(indices) == params.shape[0] then this operation will permute params accordingly.

Args: params: A Tensor. indices: A Tensor. Must be one of the following types: int32, int64. validate_indices: An optional bool. Defaults to True. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as params.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def gather_nd(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.gather_nd, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.gather_nd

Return

Applicative

Original documentation for Builder.gather_nd

def gather_nd(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.gather_nd to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.gather_nd

def gather_nd(params, indices, name=None)

Gather values or slices from params according to indices.

params is a Tensor of rank R and indices is a Tensor of rank M.

indices must be an integer tensor containing indices into params. It must have shape [d_0, ..., d_N, R], where 0 < R <= M.

The innermost dimension of indices (with length R) corresponds to indices into elements (if R = M) or slices (if R < M) along the Nth dimension of params.

Produces an output tensor with shape

[d_0, ..., d_{N-1}, params.shape[R], ..., params.shape[M-1]].

Some examples below.

Simple indexing into a matrix:

indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']

Slice indexing into a matrix:

indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]

Indexing into a 3-tensor:

indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]


indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]


indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']

Batched indexing into a matrix:

indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]

Batched slice indexing into a matrix:

indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]

Batched indexing into a 3-tensor:

indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]],
          [[['a0', 'b0'], ['c0', 'd0']]]]


indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']],
          [['a0', 'b0'], ['c1', 'd1']]]


indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]

Args: params: A Tensor. M-D. The tensor from which to gather values. indices: A Tensor. Must be one of the following types: int32, int64. (N+1)-D. Index tensor having shape [d_0, ..., d_N, R]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as params. (N+M-R)-D. Values from params gathered from indices given by indices.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def gather_nd_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.gather_nd_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.gather_nd_layer

Return

Applicative

Original documentation for Builder.gather_nd_layer

def gather_nd_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.gather_nd, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.gather_nd

def gather_nd(params, indices, name=None):

Gather values or slices from params according to indices.

params is a Tensor of rank R and indices is a Tensor of rank M.

indices must be an integer tensor containing indices into params. It must have shape [d_0, ..., d_N, R], where 0 < R <= M.

The innermost dimension of indices (with length R) corresponds to indices into elements (if R = M) or slices (if R < M) along the Nth dimension of params.

Produces an output tensor with shape

[d_0, ..., d_{N-1}, params.shape[R], ..., params.shape[M-1]].

Some examples below.

Simple indexing into a matrix:

indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']

Slice indexing into a matrix:

indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]

Indexing into a 3-tensor:

indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]


indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]


indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']

Batched indexing into a matrix:

indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]

Batched slice indexing into a matrix:

indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]

Batched indexing into a 3-tensor:

indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]],
          [[['a0', 'b0'], ['c0', 'd0']]]]


indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']],
          [['a0', 'b0'], ['c1', 'd1']]]


indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]

Args: params: A Tensor. M-D. The tensor from which to gather values. indices: A Tensor. Must be one of the following types: int32, int64. (N+1)-D. Index tensor having shape [d_0, ..., d_N, R]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as params. (N+M-R)-D. Values from params gathered from indices given by indices.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_collection(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_collection, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_collection

Return

Applicative

Original documentation for Builder.get_collection

def get_collection(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_collection to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_collection

def get_collection(key, scope=None)

Wrapper for Graph.get_collection() using the default graph.

See Graph.get_collection() for more details.

Args: key: The key for the collection. For example, the GraphKeys class contains many standard names for collections. scope: (Optional.) If supplied, the resulting list is filtered to include only items whose name attribute matches using re.match. Items without a name attribute are never returned if a scope is supplied, and the choice of re.match means that a scope without special tokens filters by prefix.

Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.
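
A short sketch of key/scope filtering; the scope name is an illustrative assumption:

```python
import tensorflow as tf

with tf.variable_scope("layer1"):
    w = tf.get_variable("w", [3])  # trainable by default

all_trainable = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
# re.match against the name attribute, so "layer1" filters by prefix.
layer1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="layer1")
```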

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_collection_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_collection_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_collection_layer

Return

Applicative

Original documentation for Builder.get_collection_layer

def get_collection_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_collection, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_collection

def get_collection(key, scope=None):

Wrapper for Graph.get_collection() using the default graph.

See Graph.get_collection() for more details.

Args: key: The key for the collection. For example, the GraphKeys class contains many standard names for collections. scope: (Optional.) If supplied, the resulting list is filtered to include only items whose name attribute matches using re.match. Items without a name attribute are never returned if a scope is supplied, and the choice of re.match means that a scope without special tokens filters by prefix.

Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_collection_ref(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_collection_ref, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_collection_ref

Return

Applicative

Original documentation for Builder.get_collection_ref

def get_collection_ref(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_collection_ref to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_collection_ref

def get_collection_ref(key)

Wrapper for Graph.get_collection_ref() using the default graph.

See Graph.get_collection_ref() for more details.

Args: key: The key for the collection. For example, the GraphKeys class contains many standard names for collections.

Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection. Note that this returns the collection list itself, which can be modified in place to change the collection.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_collection_ref_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_collection_ref_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_collection_ref_layer

Return

Applicative

Original documentation for Builder.get_collection_ref_layer

def get_collection_ref_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_collection_ref, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_collection_ref

def get_collection_ref(key):

Wrapper for Graph.get_collection_ref() using the default graph.

See Graph.get_collection_ref() for more details.

Args: key: The key for the collection. For example, the GraphKeys class contains many standard names for collections.

Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection. Note that this returns the collection list itself, which can be modified in place to change the collection.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_default_graph(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_default_graph, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_default_graph

Return

Applicative

Original documentation for Builder.get_default_graph

def get_default_graph(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_default_graph to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_default_graph

def get_default_graph()

Returns the default graph for the current thread.

The returned graph will be the innermost graph on which a Graph.as_default() context has been entered, or a global default graph if none has been explicitly created.

NOTE: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.

Returns: The default Graph being used in the current thread.
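
A sketch of the thread-locality caveat above:

```python
import threading
import tensorflow as tf

g = tf.get_default_graph()

def worker():
    # Without g.as_default(), this thread would see a different default graph.
    with g.as_default():
        assert tf.get_default_graph() is g

t = threading.Thread(target=worker)
t.start()
t.join()
```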

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_default_graph_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_default_graph_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_default_graph_layer

Return

Applicative

Original documentation for Builder.get_default_graph_layer

def get_default_graph_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_default_graph, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_default_graph

def get_default_graph():

Returns the default graph for the current thread.

The returned graph will be the innermost graph on which a Graph.as_default() context has been entered, or a global default graph if none has been explicitly created.

NOTE: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.

Returns: The default Graph being used in the current thread.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_default_session(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_default_session, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_default_session

Return

Applicative

Original documentation for Builder.get_default_session

def get_default_session(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_default_session to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_default_session

def get_default_session()

Returns the default session for the current thread.

The returned Session will be the innermost session on which a Session or Session.as_default() context has been entered.

NOTE: The default session is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a with sess.as_default(): in that thread's function.

Returns: The default Session being used in the current thread.
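
A sketch of installing and querying a default session:

```python
import tensorflow as tf

c = tf.constant(42)
sess = tf.Session()
with sess.as_default():
    assert tf.get_default_session() is sess
    print(c.eval())  # eval() runs in the default session -> 42
```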

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_default_session_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_default_session_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_default_session_layer

Return

Applicative

Original documentation for Builder.get_default_session_layer

def get_default_session_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_default_session, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_default_session

def get_default_session():

Returns the default session for the current thread.

The returned Session will be the innermost session on which a Session or Session.as_default() context has been entered.

NOTE: The default session is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a with sess.as_default(): in that thread's function.

Returns: The default Session being used in the current thread.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_seed(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_seed, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_seed

Return

Applicative

Original documentation for Builder.get_seed

def get_seed(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_seed to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_seed

def get_seed(op_seed)

Returns the local seeds an operation should use given an op-specific seed.

Given operation-specific seed, op_seed, this helper function returns two seeds derived from graph-level and op-level seeds. Many random operations internally use the two seeds to allow user to change the seed globally for a graph, or for only specific operations.

For details on how the graph-level seed interacts with op seeds, see set_random_seed.

Args: op_seed: integer.

Returns: A tuple of two integers that should be used for the local seed of this operation.
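
A sketch of the two-level seeding scheme, assuming tf.get_seed is exported as documented above:

```python
import tensorflow as tf

tf.set_random_seed(1234)           # graph-level seed
seed1, seed2 = tf.get_seed(5678)   # pair derived from graph- and op-level seeds

# Random ops perform the same derivation internally for their seed argument.
x = tf.random_uniform([2], seed=5678)
```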

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_seed_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_seed_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_seed_layer

Return

Applicative

Original documentation for Builder.get_seed_layer

def get_seed_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_seed, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_seed

def get_seed(op_seed):

Returns the local seeds an operation should use given an op-specific seed.

Given operation-specific seed, op_seed, this helper function returns two seeds derived from graph-level and op-level seeds. Many random operations internally use the two seeds to allow user to change the seed globally for a graph, or for only specific operations.

For details on how the graph-level seed interacts with op seeds, see set_random_seed.

Args: op_seed: integer.

Returns: A tuple of two integers that should be used for the local seed of this operation.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_session_handle(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_session_handle, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_session_handle

Return

Applicative

Original documentation for Builder.get_session_handle

def get_session_handle(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_session_handle to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_session_handle

def get_session_handle(data, name=None)

Return the handle of data.

This is EXPERIMENTAL and subject to change.

Keep data "in-place" in the runtime and create a handle that can be used to retrieve data in a subsequent run().

Combined with get_session_tensor, we can keep a tensor produced in one run call in place, and use it as the input in a future run call.

Args: data: A tensor to be stored in the session. name: Optional name prefix for the return tensor.

Returns: A scalar string tensor representing a unique handle for data.

Raises: TypeError: if data is not a Tensor.

Example:

```python
c = tf.mul(a, b)
h = tf.get_session_handle(c)
h = sess.run(h)

p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.mul(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_session_handle_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_session_handle_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_session_handle_layer

Return

Applicative

Original documentation for Builder.get_session_handle_layer

def get_session_handle_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_session_handle, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_session_handle

def get_session_handle(data, name=None):

Return the handle of data.

This is EXPERIMENTAL and subject to change.

Keep data "in-place" in the runtime and create a handle that can be used to retrieve data in a subsequent run().

Combined with get_session_tensor, we can keep a tensor produced in one run call in place, and use it as the input in a future run call.

Args: data: A tensor to be stored in the session. name: Optional name prefix for the return tensor.

Returns: A scalar string tensor representing a unique handle for data.

Raises: TypeError: if data is not a Tensor.

Example:

```python
c = tf.mul(a, b)
h = tf.get_session_handle(c)
h = sess.run(h)

p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.mul(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_session_tensor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_session_tensor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_session_tensor

Return

Applicative

Original documentation for Builder.get_session_tensor

def get_session_tensor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_session_tensor to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_session_tensor

def get_session_tensor(handle, dtype, name=None)

Get the tensor of type dtype by feeding a tensor handle.

This is EXPERIMENTAL and subject to change.

Get the value of the tensor from a tensor handle. The tensor is produced in a previous run() and stored in the state of the session.

Args: handle: The string representation of a persistent tensor handle. dtype: The type of the output tensor. name: Optional name prefix for the return tensor.

Returns: A pair of tensors. The first is a placeholder for feeding a tensor handle and the second is the tensor in the session state keyed by the tensor handle.

Example:

```python
c = tf.mul(a, b)
h = tf.get_session_handle(c)
h = sess.run(h)

p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.mul(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_session_tensor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_session_tensor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_session_tensor_layer

Return

Applicative

Original documentation for Builder.get_session_tensor_layer

def get_session_tensor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_session_tensor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_session_tensor

def get_session_tensor(handle, dtype, name=None):

Get the tensor of type dtype by feeding a tensor handle.

This is EXPERIMENTAL and subject to change.

Get the value of the tensor from a tensor handle. The tensor is produced in a previous run() and stored in the state of the session.

Args: handle: The string representation of a persistent tensor handle. dtype: The type of the output tensor. name: Optional name prefix for the return tensor.

Returns: A pair of tensors. The first is a placeholder for feeding a tensor handle and the second is the tensor in the session state keyed by the tensor handle.

Example:

```python
c = tf.mul(a, b)
h = tf.get_session_handle(c)
h = sess.run(h)

p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.mul(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_variable(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_variable, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_variable

Return

Applicative

Original documentation for Builder.get_variable

def get_variable(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_variable to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_variable

def get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, custom_getter=None)

Gets an existing variable with these parameters or create a new one.

This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example:

```python
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])  # v.name == "foo/v:0"
    w = tf.get_variable("w", [1])  # w.name == "foo/w:0"
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v")  # The same as v above.
```

If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a uniform_unit_scaling_initializer will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape.

Similarly, if the regularizer is None (the default), the default regularizer passed in the variable scope will be used (if that is None too, then by default no regularization is performed).

If a partitioner is provided, first a sharded Variable is created via _get_partitioned_variable, and the return value is a Tensor composed of the shards concatenated along the partition axis.

Some useful partitioners are available. See, e.g., variable_axis_size_partitioner and min_max_variable_partitioner.

Args: name: The name of the new or existing variable. shape: Shape of the new or existing variable. dtype: Type of the new or existing variable (defaults to DT_FLOAT). initializer: Initializer for the variable if one is created. regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. trainable: If True, also add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). collections: List of graph collection keys to add the Variable to. Defaults to [GraphKeys.VARIABLES] (see tf.Variable). caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements. partitioner: Optional callable that accepts a fully defined TensorShape and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. custom_getter: Callable that takes as its first argument the true getter and allows overwriting the internal get_variable method. The signature of custom_getter should match that of this method, but the most future-proof version will allow for changes: def custom_getter(getter, *args, **kwargs). Direct access to all get_variable parameters is also allowed: def custom_getter(getter, name, *args, **kwargs). A simple identity custom getter that creates variables with modified names is: def custom_getter(getter, name, *args, **kwargs): return getter(name + '_suffix', *args, **kwargs)

Returns: The created or existing variable.

Raises: ValueError: when creating a new variable and shape is not declared, when violating reuse during variable creation, or when initializer dtype and dtype don't match. Reuse is set inside variable_scope.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_variable_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_variable_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_variable_layer

Return

Applicative

Original documentation for Builder.get_variable_layer

def get_variable_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_variable, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_variable

def get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, custom_getter=None):

Gets an existing variable with these parameters or create a new one.

This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example:

```python
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])  # v.name == "foo/v:0"
    w = tf.get_variable("w", [1])  # w.name == "foo/w:0"
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v")  # The same as v above.
```

If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a uniform_unit_scaling_initializer will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape.

Similarly, if the regularizer is None (the default), the default regularizer passed in the variable scope will be used (if that is None too, then by default no regularization is performed).

If a partitioner is provided, first a sharded Variable is created via _get_partitioned_variable, and the return value is a Tensor composed of the shards concatenated along the partition axis.

Some useful partitioners are available. See, e.g., variable_axis_size_partitioner and min_max_variable_partitioner.

Args: name: The name of the new or existing variable. shape: Shape of the new or existing variable. dtype: Type of the new or existing variable (defaults to DT_FLOAT). initializer: Initializer for the variable if one is created. regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. trainable: If True, also add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). collections: List of graph collection keys to add the Variable to. Defaults to [GraphKeys.VARIABLES] (see tf.Variable). caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements. partitioner: Optional callable that accepts a fully defined TensorShape and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. custom_getter: Callable that takes as its first argument the true getter and allows overwriting the internal get_variable method. The signature of custom_getter should match that of this method, but the most future-proof version will allow for changes: def custom_getter(getter, *args, **kwargs). Direct access to all get_variable parameters is also allowed: def custom_getter(getter, name, *args, **kwargs). A simple identity custom getter that creates variables with modified names is: def custom_getter(getter, name, *args, **kwargs): return getter(name + '_suffix', *args, **kwargs)

Returns: The created or existing variable.

Raises: ValueError: when creating a new variable and shape is not declared, when violating reuse during variable creation, or when initializer dtype and dtype don't match. Reuse is set inside variable_scope.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
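Every generated method below follows this same shape: the alias merely queues a Builder -> Builder function via compose, and nothing touches an actual tensor until the applicative is finally applied to a concrete Builder. As rough orientation, here is a minimal sketch of that composition idea (a hypothetical stand-in class, not the library's actual source):

```python
# Hypothetical sketch of the lifting pattern behind every generated alias;
# NOT tensorbuilder's real implementation.
class MiniApplicative(object):
    def __init__(self, f=lambda builder: builder):
        self.f = f

    def compose(self, g):
        f = self.f
        # Return a new applicative; the original stays untouched (@immutable).
        return MiniApplicative(lambda builder: g(f(builder)))

    def __call__(self, builder):
        return self.f(builder)
```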

def get_variable_scope(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_variable_scope, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_variable_scope

Return

Applicative

Original documentation for Builder.get_variable_scope

def get_variable_scope(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.get_variable_scope that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.get_variable_scope

def get_variable_scope()

Returns the current variable scope.
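As a quick illustration (a sketch against the pre-1.0 TensorFlow API these docs describe; not part of the original documentation):

```python
import tensorflow as tf

with tf.variable_scope("outer"):
    scope = tf.get_variable_scope()
    print(scope.name)   # "outer"
    print(scope.reuse)  # False
```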

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def get_variable_scope_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.get_variable_scope_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.get_variable_scope_layer

Return

Applicative

Original documentation for Builder.get_variable_scope_layer

def get_variable_scope_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.get_variable_scope, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.get_variable_scope

def get_variable_scope():

Returns the current variable scope.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def global_norm(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.global_norm, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.global_norm

Return

Applicative

Original documentation for Builder.global_norm

def global_norm(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.global_norm that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.global_norm

def global_norm(t_list, name=None)

Computes the global norm of multiple tensors.

Given a tuple or list of tensors t_list, this operation returns the global norm of the elements in all tensors in t_list. The global norm is computed as:

global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))

Any entries in t_list that are of type None are ignored.

Args: t_list: A tuple or list of mixed Tensors, IndexedSlices, or None. name: A name for the operation (optional).

Returns: A 0-D (scalar) Tensor of type float.

Raises: TypeError: If t_list is not a sequence.
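A minimal sketch of the formula above (assuming the pre-1.0 session API used throughout these docs; not from the original documentation):

```python
import tensorflow as tf

t1 = tf.constant([3.0, 4.0])      # l2 norm: 5
t2 = tf.constant([[1.0], [2.0]])  # l2 norm: sqrt(5)

norm = tf.global_norm([t1, t2])   # sqrt(5**2 + sqrt(5)**2) = sqrt(30)

with tf.Session() as sess:
    print(sess.run(norm))  # ~5.477
```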

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def global_norm_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.global_norm_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.global_norm_layer

Return

Applicative

Original documentation for Builder.global_norm_layer

def global_norm_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.global_norm, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.global_norm

def global_norm(t_list, name=None):

Computes the global norm of multiple tensors.

Given a tuple or list of tensors t_list, this operation returns the global norm of the elements in all tensors in t_list. The global norm is computed as:

global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))

Any entries in t_list that are of type None are ignored.

Args: t_list: A tuple or list of mixed Tensors, IndexedSlices, or None. name: A name for the operation (optional).

Returns: A 0-D (scalar) Tensor of type float.

Raises: TypeError: If t_list is not a sequence.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def gradients(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.gradients, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.gradients

Return

Applicative

Original documentation for Builder.gradients

def gradients(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.gradients that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.gradients

def gradients(ys, xs, grad_ys=None, name="gradients", colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)

Constructs symbolic partial derivatives of sum of ys w.r.t. x in xs.

ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys.

gradients() adds ops to the graph to output the partial derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys.

grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys. When grad_ys is None, we fill in a tensor of '1's of the shape of y for each y in ys. A user can provide their own initial grad_ys to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y).

Args:

  • ys: A Tensor or list of tensors to be differentiated.
  • xs: A Tensor or list of tensors to be used for differentiation.
  • grad_ys: Optional. A Tensor or list of tensors the same size as ys and holding the gradients computed for each y in ys.
  • name: Optional name to use for grouping all the gradient ops together. Defaults to 'gradients'.
  • colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
  • gate_gradients: If True, add a tuple around the gradients returned for an operation. This avoids some race conditions.
  • aggregation_method: Specifies the method used to combine gradient terms. Accepted values are constants defined in the class AggregationMethod.

Returns: A list of sum(dy/dx) for each x in xs.

Raises: LookupError: if one of the operations between x and y does not have a registered gradient function. ValueError: if the arguments are invalid.
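For example (a sketch, not from the original docs): differentiating y = x**2 returns one gradient tensor per entry in xs.

```python
import tensorflow as tf

x = tf.constant(3.0)
y = x * x

# d(x**2)/dx = 2x, so the single returned gradient evaluates to 6.
grad, = tf.gradients(ys=y, xs=[x])

with tf.Session() as sess:
    print(sess.run(grad))  # 6.0
```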

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def gradients_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.gradients_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.gradients_layer

Return

Applicative

Original documentation for Builder.gradients_layer

def gradients_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.gradients, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.gradients

def gradients(ys, xs, grad_ys=None, name="gradients", colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None):

Constructs symbolic partial derivatives of sum of ys w.r.t. x in xs.

ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys.

gradients() adds ops to the graph to output the partial derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys.

grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys. When grad_ys is None, we fill in a tensor of '1's of the shape of y for each y in ys. A user can provide their own initial grad_ys to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y).

Args:

  • ys: A Tensor or list of tensors to be differentiated.
  • xs: A Tensor or list of tensors to be used for differentiation.
  • grad_ys: Optional. A Tensor or list of tensors the same size as ys and holding the gradients computed for each y in ys.
  • name: Optional name to use for grouping all the gradient ops together. Defaults to 'gradients'.
  • colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
  • gate_gradients: If True, add a tuple around the gradients returned for an operation. This avoids some race conditions.
  • aggregation_method: Specifies the method used to combine gradient terms. Accepted values are constants defined in the class AggregationMethod.

Returns: A list of sum(dy/dx) for each x in xs.

Raises: LookupError: if one of the operations between x and y does not have a registered gradient function. ValueError: if the arguments are invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def greater(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.greater, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.greater

Return

Applicative

Original documentation for Builder.greater

def greater(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.greater that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.greater

def greater(x, y, name=None)

Returns the truth value of (x > y) element-wise.

NOTE: Greater supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.
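A small broadcasting sketch (not from the original docs):

```python
import tensorflow as tf

x = tf.constant([1, 3, 5])
y = tf.constant(2)  # the scalar broadcasts against x

with tf.Session() as sess:
    print(sess.run(tf.greater(x, y)))  # [False  True  True]
```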

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def greater_equal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.greater_equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.greater_equal

Return

Applicative

Original documentation for Builder.greater_equal

def greater_equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.greater_equal that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.greater_equal

def greater_equal(x, y, name=None)

Returns the truth value of (x >= y) element-wise.

NOTE: GreaterEqual supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def greater_equal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.greater_equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.greater_equal_layer

Return

Applicative

Original documentation for Builder.greater_equal_layer

def greater_equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.greater_equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.greater_equal

def greater_equal(x, y, name=None):

Returns the truth value of (x >= y) element-wise.

NOTE: GreaterEqual supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def greater_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.greater_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.greater_layer

Return

Applicative

Original documentation for Builder.greater_layer

def greater_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.greater, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.greater

def greater(x, y, name=None):

Returns the truth value of (x > y) element-wise.

NOTE: Greater supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def group(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.group, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.group

Return

Applicative

Original documentation for Builder.group

def group(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.group that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.group

def group(*inputs, **kwargs)

Create an op that groups multiple operations.

When this op finishes, all ops in input have finished. This op has no output.

See also tuple and with_dependencies.

Args:

  • inputs: Zero or more tensors to group.
  • **kwargs: Optional parameters to pass when constructing the NodeDef.
  • name: A name for this operation (optional).

Returns: An Operation that executes all its inputs.

Raises: ValueError: If an unknown keyword argument is provided.
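A short sketch of grouping two assignments (assuming the pre-1.0 API documented in this module; not from the original docs):

```python
import tensorflow as tf

a = tf.Variable(0)
b = tf.Variable(0)

# Running `update` runs both assignments; the grouped op itself has no output.
update = tf.group(tf.assign(a, 1), tf.assign(b, 2))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(update)
    print(sess.run([a, b]))  # [1, 2]
```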

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def group_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.group_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.group_layer

Return

Applicative

Original documentation for Builder.group_layer

def group_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.group, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.group

def group(*inputs, **kwargs):

Create an op that groups multiple operations.

When this op finishes, all ops in input have finished. This op has no output.

See also tuple and with_dependencies.

Args:

  • inputs: Zero or more tensors to group.
  • **kwargs: Optional parameters to pass when constructing the NodeDef.
  • name: A name for this operation (optional).

Returns: An Operation that executes all its inputs.

Raises: ValueError: If an unknown keyword argument is provided.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def histogram_fixed_width(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.histogram_fixed_width, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.histogram_fixed_width

Return

Applicative

Original documentation for Builder.histogram_fixed_width

def histogram_fixed_width(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.histogram_fixed_width that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.histogram_fixed_width

def histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None)

Return histogram of values.

Given the tensor values, this operation returns a rank 1 histogram counting the number of entries in values that fell into every bin. The bins are equal width and determined by the arguments value_range and nbins.

Args:

  • values: Numeric Tensor.
  • value_range: Shape [2] Tensor. new_values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1]. Must be same dtype as new_values.
  • nbins: Scalar int32 Tensor. Number of histogram bins.
  • dtype: dtype for returned histogram.
  • name: A name for this operation (defaults to 'histogram_fixed_width').

Returns: A 1-D Tensor holding histogram of values.

Examples:

```python
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]

with tf.Session() as sess:
    hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
    tf.initialize_all_variables().run()
    sess.run(hist)  # => [2, 1, 1, 0, 2]
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def histogram_fixed_width_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.histogram_fixed_width_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.histogram_fixed_width_layer

Return

Applicative

Original documentation for Builder.histogram_fixed_width_layer

def histogram_fixed_width_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.histogram_fixed_width, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.histogram_fixed_width

def histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None):

Return histogram of values.

Given the tensor values, this operation returns a rank 1 histogram counting the number of entries in values that fell into every bin. The bins are equal width and determined by the arguments value_range and nbins.

Args:

  • values: Numeric Tensor.
  • value_range: Shape [2] Tensor. new_values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1]. Must be same dtype as new_values.
  • nbins: Scalar int32 Tensor. Number of histogram bins.
  • dtype: dtype for returned histogram.
  • name: A name for this operation (defaults to 'histogram_fixed_width').

Returns: A 1-D Tensor holding histogram of values.

Examples:

```python
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]

with tf.Session() as sess:
    hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
    tf.initialize_all_variables().run()
    sess.run(hist)  # => [2, 1, 1, 0, 2]
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def histogram_summary(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.histogram_summary, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.histogram_summary

Return

Applicative

Original documentation for Builder.histogram_summary

def histogram_summary(builder, tag):

THIS METHOD IS AUTOMATICALLY GENERATED

Same as tf.histogram_summary(tag, values, collections=None, name=None) but with the summary tensor as its first parameter.

Return

Builder

Original documentation for tf.histogram_summary

def histogram_summary(tag, values, collections=None, name=None):

Outputs a Summary protocol buffer with a histogram.

The generated Summary has one summary value containing a histogram for values.

This op reports an InvalidArgument error if any value is not finite.

Args:

  • tag: A string Tensor. 0-D. Tag to use for the summary value.
  • values: A real numeric Tensor. Any shape. Values to use to build the histogram.
  • collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to [GraphKeys.SUMMARIES].
  • name: A name for the operation (optional).

Returns: A scalar Tensor of type string. The serialized Summary protocol buffer.
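A usage sketch (hedged: the summary-writer API varied across early TensorFlow releases, so treat the tf.train.SummaryWriter call below as an assumption):

```python
import tensorflow as tf

weights = tf.Variable(tf.random_normal([100]))
summary_op = tf.histogram_summary("weights", weights)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    writer = tf.train.SummaryWriter("/tmp/logs", sess.graph)  # assumed era API
    writer.add_summary(sess.run(summary_op), global_step=0)
```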

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def identity(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.identity, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.identity

Return

Applicative

Original documentation for Builder.identity

def identity(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.identity that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.identity

def identity(input, name=None)

Return a tensor with the same shape and contents as the input tensor or value.

Args: input: A Tensor. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def identity_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.identity_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.identity_layer

Return

Applicative

Original documentation for Builder.identity_layer

def identity_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.identity, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.identity

def identity(input, name=None):

Return a tensor with the same shape and contents as the input tensor or value.

Args: input: A Tensor. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ifft(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ifft, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ifft

Return

Applicative

Original documentation for Builder.ifft

def ifft(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ifft that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ifft

def ifft(input, name=None)

Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most dimension of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its inverse 1D Fourier Transform.
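A round-trip sketch (not from the original docs): applying fft then ifft recovers the input up to floating-point error.

```python
import tensorflow as tf

x = tf.constant([1, 2, 3, 4], dtype=tf.complex64)

roundtrip = tf.ifft(tf.fft(x))  # ifft inverts the forward transform

with tf.Session() as sess:
    print(sess.run(roundtrip))  # ~[1.+0.j  2.+0.j  3.+0.j  4.+0.j]
```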

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ifft2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ifft2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ifft2d

Return

Applicative

Original documentation for Builder.ifft2d

def ifft2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ifft2d that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ifft2d

def ifft2d(input, name=None)

Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their inverse 2D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ifft2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ifft2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ifft2d_layer

Return

Applicative

Original documentation for Builder.ifft2d_layer

def ifft2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ifft2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ifft2d

def ifft2d(input, name=None):

Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their inverse 2D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ifft3d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ifft3d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ifft3d

Return

Applicative

Original documentation for Builder.ifft3d

def ifft3d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ifft3d that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ifft3d

def ifft3d(input, name=None)

Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their inverse 3D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ifft3d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ifft3d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ifft3d_layer

Return

Applicative

Original documentation for Builder.ifft3d_layer

def ifft3d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ifft3d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ifft3d

def ifft3d(input, name=None):

Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their inverse 3D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ifft_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ifft_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ifft_layer

Return

Applicative

Original documentation for Builder.ifft_layer

def ifft_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ifft, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ifft

def ifft(input, name=None):

Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most dimension of input.

Args: input: A Tensor of type complex64. A complex64 tensor. name: A name for the operation (optional).

Returns: A Tensor of type complex64. A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its inverse 1D Fourier Transform.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def igamma(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.igamma, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.igamma

Return

Applicative

Original documentation for Builder.igamma

def igamma(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.igamma that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.igamma

def igamma(a, x, name=None)

Compute the lower regularized incomplete Gamma function P(a, x).

The lower regularized incomplete Gamma function is defined as:

$$P(a, x) = \frac{\gamma(a, x)}{\Gamma(a)} = 1 - Q(a, x)$$

where $\gamma(a, x) = \int_{0}^{x} t^{a-1} e^{-t}\,dt$ is the lower incomplete Gamma function.

Note: above, Q(a, x) (Igammac) is the upper regularized incomplete Gamma function.

Args: a: A Tensor. Must be one of the following types: float32, float64. x: A Tensor. Must have the same type as a. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def igamma_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.igamma_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.igamma_layer

Return

Applicative

Original documentation for Builder.igamma_layer

def igamma_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.igamma, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.igamma

def igamma(a, x, name=None):

Compute the lower regularized incomplete Gamma function P(a, x).

The lower regularized incomplete Gamma function is defined as:

$$P(a, x) = \frac{\gamma(a, x)}{\Gamma(a)} = 1 - Q(a, x)$$

where $\gamma(a, x) = \int_{0}^{x} t^{a-1} e^{-t}\,dt$ is the lower incomplete Gamma function.

Note: above, Q(a, x) (Igammac) is the upper regularized incomplete Gamma function.

Args: a: A Tensor. Must be one of the following types: float32, float64. x: A Tensor. Must have the same type as a. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def igammac(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.igammac, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.igammac

Return

Applicative

Original documentation for Builder.igammac

def igammac(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.igammac that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.igammac

def igammac(a, x, name=None)

Compute the upper regularized incomplete Gamma function Q(a, x).

The upper regularized incomplete Gamma function is defined as:

$$Q(a, x) = \frac{\Gamma(a, x)}{\Gamma(a)} = 1 - P(a, x)$$

where $\Gamma(a, x) = \int_{x}^{\infty} t^{a-1} e^{-t}\,dt$ is the upper incomplete Gamma function.

Note: above, P(a, x) (Igamma) is the lower regularized incomplete Gamma function.

Args: a: A Tensor. Must be one of the following types: float32, float64. x: A Tensor. Must have the same type as a. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.
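Since P(a, x) and Q(a, x) are complementary, a quick sanity-check sketch (not from the original docs):

```python
import tensorflow as tf

a = tf.constant([0.5, 1.0, 2.0])
x = tf.constant([1.0, 1.0, 1.0])

# The regularized lower and upper incomplete Gamma functions sum to 1.
total = tf.igamma(a, x) + tf.igammac(a, x)

with tf.Session() as sess:
    print(sess.run(total))  # ~[1. 1. 1.]
```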

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def igammac_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.igammac_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.igammac_layer

Return

Applicative

Original documentation for Builder.igammac_layer

def igammac_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.igammac, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.igammac

def igammac(a, x, name=None):

Compute the upper regularized incomplete Gamma function Q(a, x).

The upper regularized incomplete Gamma function is defined as:

$$Q(a, x) = \frac{\Gamma(a, x)}{\Gamma(a)} = 1 - P(a, x)$$

where $\Gamma(a, x) = \int_{x}^{\infty} t^{a-1} e^{-t}\,dt$ is the upper incomplete Gamma function.

Note: above, P(a, x) (Igamma) is the lower regularized incomplete Gamma function.

Args: a: A Tensor. Must be one of the following types: float32, float64. x: A Tensor. Must have the same type as a. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def imag(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.imag, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.imag

Return

Applicative

Original documentation for Builder.imag

def imag(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.imag that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.imag

def imag(input, name=None)

Returns the imaginary part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float32 or float64 that is the imaginary part of each element in input. All elements in input must be complex numbers of the form (a + bj), where a is the real part and b is the imaginary part returned by this operation.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
```

Args: input: A Tensor. Must be one of the following types: complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor of type float32 or float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def imag_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.imag_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.imag_layer

Return

Applicative

Original documentation for Builder.imag_layer

def imag_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.imag, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.imag

def imag(input, name=None):

Returns the imaginary part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float32 or float64 that is the imaginary part of each element in input. All elements in input must be complex numbers of the form (a + bj), where a is the real part and b is the imaginary part returned by this operation.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
```

Args: input: A Tensor. Must be one of the following types: complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor of type float32 or float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def image_summary(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.image_summary, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.image_summary

Return

Applicative

Original documentation for Builder.image_summary

def image_summary(builder, tag):

THIS METHOD IS AUTOMATICALLY GENERATED

Same as tf.image_summary(tag, tensor, max_images=3, collections=None, name=None) but with the summary tensor as its first parameter.

Return

Builder

Original documentation for tf.image_summary

def image_summary(tag, tensor, max_images=3, collections=None, name=None):

Outputs a Summary protocol buffer with images.

The summary has up to max_images summary values containing images. The images are built from tensor which must be 4-D with shape [batch_size, height, width, channels] and where channels can be:

  • 1: tensor is interpreted as Grayscale.
  • 3: tensor is interpreted as RGB.
  • 4: tensor is interpreted as RGBA.

The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range [0, 255]. uint8 values are unchanged. The op uses two different normalization algorithms:

  • If the input values are all positive, they are rescaled so the largest one is 255.

  • If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.

The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:

  • If max_images is 1, the summary value tag is 'tag/image'.
  • If max_images is greater than 1, the summary value tags are generated sequentially as 'tag/image/0', 'tag/image/1', etc.

Args:

  • tag: A scalar Tensor of type string. Used to build the tag of the summary values.
  • tensor: A 4-D uint8 or float32 Tensor of shape [batch_size, height, width, channels] where channels is 1, 3, or 4.
  • max_images: Max number of batch elements to generate images for.
  • collections: Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to [ops.GraphKeys.SUMMARIES].
  • name: A name for the operation (optional).

Returns: A scalar Tensor of type string. The serialized Summary protocol buffer.
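A construction sketch (not from the original docs) showing the tag layout described above:

```python
import tensorflow as tf

# A batch of 4 grayscale 28x28 images with float values in [0, 1).
images = tf.random_uniform([4, 28, 28, 1])

# max_images > 1, so the emitted tags are 'inputs/image/0' ... 'inputs/image/3'.
summary_op = tf.image_summary("inputs", images, max_images=4)
```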

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def import_graph_def(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.import_graph_def, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.import_graph_def

Return

Applicative

Original documentation for Builder.import_graph_def

def import_graph_def(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.import_graph_def that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.import_graph_def

def import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)

Imports the TensorFlow graph in graph_def into the Python Graph.

This function provides a way to import a serialized TensorFlow GraphDef protocol buffer, and extract individual objects in the GraphDef as Tensor and Operation objects. See Graph.as_graph_def() for a way to create a GraphDef proto.

Args:

  • graph_def: A GraphDef proto containing operations to be imported into the default graph.
  • input_map: A dictionary mapping input names (as strings) in graph_def to Tensor objects. The values of the named input tensors in the imported graph will be re-mapped to the respective Tensor values.
  • return_elements: A list of strings containing operation names in graph_def that will be returned as Operation objects; and/or tensor names in graph_def that will be returned as Tensor objects.
  • name: (Optional.) A prefix that will be prepended to the names in graph_def. Defaults to "import".
  • op_dict: (Optional.) A dictionary mapping op type names to OpDef protos. Must contain an OpDef proto for each op type named in graph_def. If omitted, uses the OpDef protos registered in the global registry.
  • producer_op_list: (Optional.) An OpList proto with the (possibly stripped) list of OpDefs used by the producer of the graph. If provided, attrs for ops in graph_def that are not in op_dict that have their default value according to producer_op_list will be removed. This will allow some more GraphDefs produced by later binaries to be accepted by earlier binaries.

Returns: A list of Operation and/or Tensor objects from the imported graph, corresponding to the names in return_elements.

Raises: TypeError: If graph_def is not a GraphDef proto, input_map is not a dictionary mapping strings to Tensor objects, or return_elements is not a list of strings. ValueError: If input_map, or return_elements contains names that do not appear in graph_def, or graph_def is not well-formed (e.g. it refers to an unknown tensor).
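A minimal round-trip sketch (not from the original docs): serialize the current graph, then import it into a fresh graph under a name prefix.

```python
import tensorflow as tf

x = tf.constant(1.0, name="x")
graph_def = tf.get_default_graph().as_graph_def()

with tf.Graph().as_default():
    # return_elements names a tensor, so a Tensor object comes back.
    y, = tf.import_graph_def(graph_def, return_elements=["x:0"], name="copy")
    print(y.name)  # "copy/x:0"
```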

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def import_graph_def_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.import_graph_def_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.import_graph_def_layer

Return

Applicative

Original documentation for Builder.import_graph_def_layer

def import_graph_def_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.import_graph_def, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.import_graph_def

def import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None):

Imports the TensorFlow graph in graph_def into the Python Graph.

This function provides a way to import a serialized TensorFlow GraphDef protocol buffer, and extract individual objects in the GraphDef as Tensor and Operation objects. See Graph.as_graph_def() for a way to create a GraphDef proto.

Args:

  • graph_def: A GraphDef proto containing operations to be imported into the default graph.
  • input_map: A dictionary mapping input names (as strings) in graph_def to Tensor objects. The values of the named input tensors in the imported graph will be re-mapped to the respective Tensor values.
  • return_elements: A list of strings containing operation names in graph_def that will be returned as Operation objects; and/or tensor names in graph_def that will be returned as Tensor objects.
  • name: (Optional.) A prefix that will be prepended to the names in graph_def. Defaults to "import".
  • op_dict: (Optional.) A dictionary mapping op type names to OpDef protos. Must contain an OpDef proto for each op type named in graph_def. If omitted, uses the OpDef protos registered in the global registry.
  • producer_op_list: (Optional.) An OpList proto with the (possibly stripped) list of OpDefs used by the producer of the graph. If provided, attrs for ops in graph_def that are not in op_dict that have their default value according to producer_op_list will be removed. This will allow some more GraphDefs produced by later binaries to be accepted by earlier binaries.

Returns: A list of Operation and/or Tensor objects from the imported graph, corresponding to the names in return_elements.

Raises: TypeError: If graph_def is not a GraphDef proto, input_map is not a dictionary mapping strings to Tensor objects, or return_elements is not a list of strings. ValueError: If input_map, or return_elements contains names that do not appear in graph_def, or graph_def is not well-formed (e.g. it refers to an unknown tensor).

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def in_top_k(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.in_top_k, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.in_top_k

Return

Applicative

Original documentation for Builder.in_top_k

def in_top_k(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.in_top_k that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.in_top_k

def in_top_k(predictions, targets, k, name=None)

Says whether the targets are in the top K predictions.

This outputs a batch_size bool array, an entry out[i] is true if the prediction for the target class is among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k.

More formally, let

\(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, \(out_i\) be the output for example i,

$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

Args:

  • predictions: A Tensor of type float32. A batch_size x classes tensor.
  • targets: A Tensor. Must be one of the following types: int32, int64. A batch_size vector of class ids.
  • k: An int. Number of top elements to look at for computing precision.
  • name: A name for the operation (optional).

Returns: A Tensor of type bool. Computed Precision at k as a bool Tensor.
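A sketch of the tie behavior described above (not from the original docs):

```python
import tensorflow as tf

predictions = tf.constant([[0.1, 0.8, 0.1],
                           [0.3, 0.3, 0.4]])
targets = tf.constant([1, 0])

# Row 1: class 1 is the argmax, so it is in the top 2.
# Row 2: the two 0.3 entries tie at the top-2 boundary, so both count.
correct = tf.nn.in_top_k(predictions, targets, k=2)

with tf.Session() as sess:
    print(sess.run(correct))  # [ True  True]
```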

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def in_top_k_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.in_top_k_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.in_top_k_layer

Return

Applicative

Original documentation for Builder.in_top_k_layer

def in_top_k_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.in_top_k, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.in_top_k

def in_top_k(predictions, targets, k, name=None):

Says whether the targets are in the top K predictions.

This outputs a batch_size bool array, an entry out[i] is true if the prediction for the target class is among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k.

More formally, let

\(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, \(out_i\) be the output for example i,

$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

Args:

  • predictions: A Tensor of type float32. A batch_size x classes tensor.
  • targets: A Tensor. Must be one of the following types: int32, int64. A batch_size vector of class ids.
  • k: An int. Number of top elements to look at for computing precision.
  • name: A name for the operation (optional).

Returns: A Tensor of type bool. Computed Precision at k as a bool Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_all_tables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_all_tables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_all_tables

Return

Applicative

Original documentation for Builder.initialize_all_tables

def initialize_all_tables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.initialize_all_tables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.initialize_all_tables

def initialize_all_tables(name="init_all_tables")

Returns an Op that initializes all tables of the default graph.

Args: name: Optional name for the initialization op.

Returns: An Op that initializes all tables. Note that if there are no tables the returned Op is a NoOp.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_all_tables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_all_tables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_all_tables_layer

Return

Applicative

Original documentation for Builder.initialize_all_tables_layer

def initialize_all_tables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.initialize_all_tables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.initialize_all_tables

def initialize_all_tables(name="init_all_tables"):

Returns an Op that initializes all tables of the default graph.

Args: name: Optional name for the initialization op.

Returns: An Op that initializes all tables. Note that if there are no tables the returned Op is a NoOp.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_all_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_all_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_all_variables

Return

Applicative

Original documentation for Builder.initialize_all_variables

def initialize_all_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.initialize_all_variables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.initialize_all_variables

def initialize_all_variables()

Returns an Op that initializes all variables.

This is just a shortcut for initialize_variables(all_variables())

Returns: An Op that initializes all variables in the graph.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_all_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_all_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_all_variables_layer

Return

Applicative

Original documentation for Builder.initialize_all_variables_layer

def initialize_all_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.initialize_all_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.initialize_all_variables

def initialize_all_variables():

Returns an Op that initializes all variables.

This is just a shortcut for initialize_variables(all_variables())

Returns: An Op that initializes all variables in the graph.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_local_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_local_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_local_variables

Return

Applicative

Original documentation for Builder.initialize_local_variables

def initialize_local_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.initialize_local_variables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.initialize_local_variables

def initialize_local_variables()

Returns an Op that initializes all local variables.

This is just a shortcut for initialize_variables(local_variables())

Returns: An Op that initializes all local variables in the graph.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_local_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_local_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_local_variables_layer

Return

Applicative

Original documentation for Builder.initialize_local_variables_layer

def initialize_local_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.initialize_local_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.initialize_local_variables

def initialize_local_variables():

Returns an Op that initializes all local variables.

This is just a shortcut for initialize_variables(local_variables())

Returns: An Op that initializes all local variables in the graph.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_variables

Return

Applicative

Original documentation for Builder.initialize_variables

def initialize_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.initialize_variables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.initialize_variables

def initialize_variables(var_list, name="init")

Returns an Op that initializes a list of variables.

After you launch the graph in a session, you can run the returned Op to initialize all the variables in var_list. This Op runs all the initializers of the variables in var_list in parallel.

Calling initialize_variables() is equivalent to passing the list of initializers to Group().

If var_list is empty, however, the function still returns an Op that can be run. That Op just has no effect.

Args: var_list: List of Variable objects to initialize. name: Optional name for the returned operation.

Returns: An Op that runs the initializers of all the specified variables.
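
A minimal sketch of initializing only a subset of variables, assuming the TensorFlow 0.x Session API (variable names are illustrative):

    import tensorflow as tf

    a = tf.Variable(1.0, name="a")
    b = tf.Variable(2.0, name="b")
    c = tf.Variable(3.0, name="c")  # deliberately left uninitialized below

    init_ab = tf.initialize_variables([a, b], name="init_ab")
    with tf.Session() as sess:
        sess.run(init_ab)        # only a and b become usable
        print(sess.run([a, b]))  # [1.0, 2.0]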

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def initialize_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.initialize_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.initialize_variables_layer

Return

Applicative

Original documentation for Builder.initialize_variables_layer

def initialize_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.initialize_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.initialize_variables

def initialize_variables(var_list, name="init"):

Returns an Op that initializes a list of variables.

After you launch the graph in a session, you can run the returned Op to initialize all the variables in var_list. This Op runs all the initializers of the variables in var_list in parallel.

Calling initialize_variables() is equivalent to passing the list of initializers to Group().

If var_list is empty, however, the function still returns an Op that can be run. That Op just has no effect.

Args: var_list: List of Variable objects to initialize. name: Optional name for the returned operation.

Returns: An Op that runs the initializers of all the specified variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def inv(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.inv, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.inv

Return

Applicative

Original documentation for Builder.inv

def inv(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.inv that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.inv

def inv(x, name=None)

Computes the reciprocal of x element-wise.

I.e., \(y = 1 / x\).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def inv_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.inv_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.inv_layer

Return

Applicative

Original documentation for Builder.inv_layer

def inv_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.inv, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.inv

def inv(x, name=None):

Computes the reciprocal of x element-wise.

I.e., \(y = 1 / x\).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def invert_permutation(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.invert_permutation, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.invert_permutation

Return

Applicative

Original documentation for Builder.invert_permutation

def invert_permutation(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.invert_permutation that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.invert_permutation

def invert_permutation(x, name=None)

Computes the inverse permutation of a tensor.

This operation computes the inverse of an index permutation. It takes a 1-D integer tensor x, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor y and an input tensor x, this operation computes the following:

y[x[i]] = i for i in [0, 1, ..., len(x) - 1]

The values must include 0. There can be no duplicate values or negative values.

For example:

    # tensor x is [3, 4, 0, 2, 1]
    invert_permutation(x) ==> [2, 4, 3, 0, 1]

Args: x: A Tensor. Must be one of the following types: int32, int64. 1-D. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x. 1-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def invert_permutation_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.invert_permutation_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.invert_permutation_layer

Return

Applicative

Original documentation for Builder.invert_permutation_layer

def invert_permutation_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.invert_permutation, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.invert_permutation

def invert_permutation(x, name=None):

Computes the inverse permutation of a tensor.

This operation computes the inverse of an index permutation. It takes a 1-D integer tensor x, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor y and an input tensor x, this operation computes the following:

y[x[i]] = i for i in [0, 1, ..., len(x) - 1]

The values must include 0. There can be no duplicate values or negative values.

For example:

    # tensor x is [3, 4, 0, 2, 1]
    invert_permutation(x) ==> [2, 4, 3, 0, 1]

Args: x: A Tensor. Must be one of the following types: int32, int64. 1-D. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x. 1-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_finite(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_finite, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_finite

Return

Applicative

Original documentation for Builder.is_finite

def is_finite(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_finite that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_finite

def is_finite(x, name=None)

Returns which elements of x are finite.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor of type bool.
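
A minimal sketch, assuming the TensorFlow 0.x Session API; tf.is_inf and tf.is_nan below follow the same element-wise pattern:

    import numpy as np
    import tensorflow as tf

    x = tf.constant([1.0, np.inf, np.nan, -2.0])
    with tf.Session() as sess:
        print(sess.run(tf.is_finite(x)))  # [ True False False  True]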

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_finite_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_finite_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_finite_layer

Return

Applicative

Original documentation for Builder.is_finite_layer

def is_finite_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_finite, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_finite

def is_finite(x, name=None):

Returns which elements of x are finite.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_inf(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_inf, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_inf

Return

Applicative

Original documentation for Builder.is_inf

def is_inf(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_inf that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_inf

def is_inf(x, name=None)

Returns which elements of x are Inf.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_inf_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_inf_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_inf_layer

Return

Applicative

Original documentation for Builder.is_inf_layer

def is_inf_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_inf, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_inf

def is_inf(x, name=None):

Returns which elements of x are Inf.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_nan(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_nan, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_nan

Return

Applicative

Original documentation for Builder.is_nan

def is_nan(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_nan that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_nan

def is_nan(x, name=None)

Returns which elements of x are NaN.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_nan_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_nan_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_nan_layer

Return

Applicative

Original documentation for Builder.is_nan_layer

def is_nan_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_nan, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_nan

def is_nan(x, name=None):

Returns which elements of x are NaN.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_non_decreasing(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_non_decreasing, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_non_decreasing

Return

Applicative

Original documentation for Builder.is_non_decreasing

def is_non_decreasing(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_non_decreasing that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_non_decreasing

def is_non_decreasing(x, name=None)

Returns True if x is non-decreasing.

Elements of x are compared in row-major order. The tensor [x[0],...] is non-decreasing if for every adjacent pair we have x[i] <= x[i+1]. If x has less than two elements, it is trivially non-decreasing.

See also: is_strictly_increasing

Args: x: Numeric Tensor. name: A name for this operation (optional). Defaults to "is_non_decreasing"

Returns: Boolean Tensor, equal to True iff x is non-decreasing.

Raises: TypeError: if x is not a numeric tensor.
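
A minimal sketch, assuming the TensorFlow 0.x Session API; note that ties are allowed, unlike is_strictly_increasing:

    import tensorflow as tf

    with tf.Session() as sess:
        print(sess.run(tf.is_non_decreasing([1.0, 1.0, 2.0])))  # True (ties allowed)
        print(sess.run(tf.is_non_decreasing([3.0, 2.0])))       # False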

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_non_decreasing_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_non_decreasing_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_non_decreasing_layer

Return

Applicative

Original documentation for Builder.is_non_decreasing_layer

def is_non_decreasing_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_non_decreasing, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_non_decreasing

def is_non_decreasing(x, name=None):

Returns True if x is non-decreasing.

Elements of x are compared in row-major order. The tensor [x[0],...] is non-decreasing if for every adjacent pair we have x[i] <= x[i+1]. If x has less than two elements, it is trivially non-decreasing.

See also: is_strictly_increasing

Args: x: Numeric Tensor. name: A name for this operation (optional). Defaults to "is_non_decreasing"

Returns: Boolean Tensor, equal to True iff x is non-decreasing.

Raises: TypeError: if x is not a numeric tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_numeric_tensor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_numeric_tensor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_numeric_tensor

Return

Applicative

Original documentation for Builder.is_numeric_tensor

def is_numeric_tensor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_numeric_tensor that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_numeric_tensor

def is_numeric_tensor(tensor)

None

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_numeric_tensor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_numeric_tensor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_numeric_tensor_layer

Return

Applicative

Original documentation for Builder.is_numeric_tensor_layer

def is_numeric_tensor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_numeric_tensor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_numeric_tensor

def is_numeric_tensor(tensor):

None

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_strictly_increasing(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_strictly_increasing, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_strictly_increasing

Return

Applicative

Original documentation for Builder.is_strictly_increasing

def is_strictly_increasing(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_strictly_increasing that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_strictly_increasing

def is_strictly_increasing(x, name=None)

Returns True if x is strictly increasing.

Elements of x are compared in row-major order. The tensor [x[0],...] is strictly increasing if for every adjacent pair we have x[i] < x[i+1]. If x has less than two elements, it is trivially strictly increasing.

See also: is_non_decreasing

Args: x: Numeric Tensor. name: A name for this operation (optional). Defaults to "is_strictly_increasing"

Returns: Boolean Tensor, equal to True iff x is strictly increasing.

Raises: TypeError: if x is not a numeric tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_strictly_increasing_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_strictly_increasing_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_strictly_increasing_layer

Return

Applicative

Original documentation for Builder.is_strictly_increasing_layer

def is_strictly_increasing_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_strictly_increasing, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_strictly_increasing

def is_strictly_increasing(x, name=None):

Returns True if x is strictly increasing.

Elements of x are compared in row-major order. The tensor [x[0],...] is strictly increasing if for every adjacent pair we have x[i] < x[i+1]. If x has less than two elements, it is trivially strictly increasing.

See also: is_non_decreasing

Args: x: Numeric Tensor. name: A name for this operation (optional). Defaults to "is_strictly_increasing"

Returns: Boolean Tensor, equal to True iff x is strictly increasing.

Raises: TypeError: if x is not a numeric tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_variable_initialized(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_variable_initialized, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_variable_initialized

Return

Applicative

Original documentation for Builder.is_variable_initialized

def is_variable_initialized(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.is_variable_initialized that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.is_variable_initialized

def is_variable_initialized(variable)

Tests if a variable has been initialized.

Args: variable: A Variable.

Returns: A scalar boolean Tensor: True if the variable has been initialized, False otherwise.
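
A minimal sketch, assuming the TensorFlow 0.x Session API; the check can itself be run before the variable is initialized:

    import tensorflow as tf

    v = tf.Variable(3.0)
    check = tf.is_variable_initialized(v)
    with tf.Session() as sess:
        print(sess.run(check))                   # False
        sess.run(tf.initialize_all_variables())
        print(sess.run(check))                   # True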

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def is_variable_initialized_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.is_variable_initialized_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.is_variable_initialized_layer

Return

Applicative

Original documentation for Builder.is_variable_initialized_layer

def is_variable_initialized_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.is_variable_initialized, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.is_variable_initialized

def is_variable_initialized(variable):

Tests if a variable has been initialized.

Args: variable: A Variable.

Returns: A scalar boolean Tensor: True if the variable has been initialized, False otherwise.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def l2_loss(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.l2_loss, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.l2_loss

Return

Applicative

Original documentation for Builder.l2_loss

def l2_loss(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.l2_loss that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.l2_loss

def l2_loss(t, name=None)

L2 Loss.

Computes half the L2 norm of a tensor without the sqrt:

output = sum(t ** 2) / 2

Args: t: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Typically 2-D, but may have any dimensions. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as t. 0-D.
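
For t = [1, 2, 3] this gives (1 + 4 + 9) / 2 = 7. A minimal sketch, assuming the TensorFlow 0.x Session API:

    import tensorflow as tf

    t = tf.constant([1.0, 2.0, 3.0])
    with tf.Session() as sess:
        print(sess.run(tf.nn.l2_loss(t)))  # 7.0 == (1 + 4 + 9) / 2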

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def l2_loss_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.l2_loss_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.l2_loss_layer

Return

Applicative

Original documentation for Builder.l2_loss_layer

def l2_loss_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.l2_loss, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.l2_loss

def l2_loss(t, name=None):

L2 Loss.

Computes half the L2 norm of a tensor without the sqrt:

output = sum(t ** 2) / 2

Args: t: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Typically 2-D, but may have any dimensions. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as t. 0-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def l2_normalize(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.l2_normalize, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.l2_normalize

Return

Applicative

Original documentation for Builder.l2_normalize

def l2_normalize(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.l2_normalize that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.l2_normalize

def l2_normalize(x, dim, epsilon=1e-12, name=None)

Normalizes along dimension dim using an L2 norm.

For a 1-D tensor with dim = 0, computes

output = x / sqrt(max(sum(x**2), epsilon))

For x with more dimensions, independently normalizes each 1-D slice along dimension dim.

Args: x: A Tensor. dim: Dimension along which to normalize. A scalar or a vector of integers. epsilon: A lower bound value for the norm. Will use sqrt(epsilon) as the divisor if norm < sqrt(epsilon). name: A name for this operation (optional).

Returns: A Tensor with the same shape as x.
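
For a 1-D x = [3, 4] with dim = 0 the norm is sqrt(9 + 16) = 5, so the output is [0.6, 0.8]. A minimal sketch, assuming the TensorFlow 0.x Session API:

    import tensorflow as tf

    x = tf.constant([3.0, 4.0])
    with tf.Session() as sess:
        print(sess.run(tf.nn.l2_normalize(x, dim=0)))  # [0.6 0.8]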

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def l2_normalize_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.l2_normalize_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.l2_normalize_layer

Return

Applicative

Original documentation for Builder.l2_normalize_layer

def l2_normalize_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.l2_normalize, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.l2_normalize

def l2_normalize(x, dim, epsilon=1e-12, name=None):

Normalizes along dimension dim using an L2 norm.

For a 1-D tensor with dim = 0, computes

output = x / sqrt(max(sum(x**2), epsilon))

For x with more dimensions, independently normalizes each 1-D slice along dimension dim.

Args: x: A Tensor. dim: Dimension along which to normalize. A scalar or a vector of integers. epsilon: A lower bound value for the norm. Will use sqrt(epsilon) as the divisor if norm < sqrt(epsilon). name: A name for this operation (optional).

Returns: A Tensor with the same shape as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lbeta(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lbeta, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lbeta

Return

Applicative

Original documentation for Builder.lbeta

def lbeta(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.lbeta that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.lbeta

def lbeta(x, name="lbeta")

Computes ln(|Beta(x)|), reducing along the last dimension.

Given one-dimensional z = [z_0,...,z_{K-1}], we define

Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)

And for n + 1 dimensional x with shape [N1, ..., Nn, K], we define lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|). In other words, the last dimension is treated as the z vector.

Note that if z = [u, v], then Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt, which defines the traditional bivariate beta function.

Args: x: A rank n + 1 Tensor with type float, or double. name: A name for the operation (optional).

Returns: The logarithm of |Beta(x)| reducing along the last dimension.

Raises: ValueError: If x is empty with rank one or less.
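
As a worked check: for x = [0.5, 0.5], Beta(x) = Gamma(0.5)^2 / Gamma(1) = pi, so lbeta(x) = ln(pi) ≈ 1.1447. A minimal sketch, assuming the TensorFlow 0.x Session API:

    import math
    import tensorflow as tf

    x = tf.constant([0.5, 0.5])
    with tf.Session() as sess:
        print(sess.run(tf.lbeta(x)))  # ~1.1447
        print(math.log(math.pi))      # 1.1447... for comparison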

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lbeta_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lbeta_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lbeta_layer

Return

Applicative

Original documentation for Builder.lbeta_layer

def lbeta_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.lbeta, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.lbeta

def lbeta(x, name="lbeta"):

Computes ln(|Beta(x)|), reducing along the last dimension.

Given one-dimensional z = [z_0,...,z_{K-1}], we define

Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)

And for n + 1 dimensional x with shape [N1, ..., Nn, K], we define lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|). In other words, the last dimension is treated as the z vector.

Note that if z = [u, v], then Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt, which defines the traditional bivariate beta function.

Args: x: A rank n + 1 Tensor with type float, or double. name: A name for the operation (optional).

Returns: The logarithm of |Beta(x)| reducing along the last dimension.

Raises: ValueError: If x is empty with rank one or less.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def learned_unigram_candidate_sampler(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.learned_unigram_candidate_sampler, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.learned_unigram_candidate_sampler

Return

Applicative

Original documentation for Builder.learned_unigram_candidate_sampler

def learned_unigram_candidate_sampler(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.learned_unigram_candidate_sampler that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.learned_unigram_candidate_sampler

def learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)

Samples a set of classes from a distribution learned during training.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution for this operation is constructed on the fly during training. It is a unigram distribution over the target classes seen so far during training. Every integer in [0, range_max) begins with a weight of 1, and is incremented by 1 each time it is seen as a target class. The base distribution is not saved to checkpoints, so it is reset when the model is reloaded.

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.
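
A minimal sketch of a call, assuming the TensorFlow 0.x API; the batch contents and class range are illustrative:

    import tensorflow as tf

    # Two training examples, one true class each, sampling 4 of 10 classes.
    true_classes = tf.constant([[0], [7]], dtype=tf.int64)
    sampled, true_expected, sampled_expected = (
        tf.nn.learned_unigram_candidate_sampler(
            true_classes=true_classes,
            num_true=1,
            num_sampled=4,
            unique=True,
            range_max=10))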

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def learned_unigram_candidate_sampler_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.learned_unigram_candidate_sampler_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.learned_unigram_candidate_sampler_layer

Return

Applicative

Original documentation for Builder.learned_unigram_candidate_sampler_layer

def learned_unigram_candidate_sampler_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.learned_unigram_candidate_sampler, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.learned_unigram_candidate_sampler

def learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None):

Samples a set of classes from a distribution learned during training.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution for this operation is constructed on the fly during training. It is a unigram distribution over the target classes seen so far during training. Every integer in [0, range_max) begins with a weight of 1, and is incremented by 1 each time it is seen as a target class. The base distribution is not saved to checkpoints, so it is reset when the model is reloaded.

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def less(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.less, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.less

Return

Applicative

Original documentation for Builder.less

def less(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.less that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.less

def less(x, y, name=None)

Returns the truth value of (x < y) element-wise.

NOTE: Less supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.
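
A minimal sketch showing the element-wise comparison and scalar broadcasting, assuming the TensorFlow 0.x Session API:

    import tensorflow as tf

    x = tf.constant([[1, 2], [3, 4]])
    with tf.Session() as sess:
        print(sess.run(tf.less(x, 3)))  # [[ True  True] [False False]]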

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def less_equal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.less_equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.less_equal

Return

Applicative

Original documentation for Builder.less_equal

def less_equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.less_equal that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.less_equal

def less_equal(x, y, name=None)

Returns the truth value of (x <= y) element-wise.

NOTE: LessEqual supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def less_equal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.less_equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.less_equal_layer

Return

Applicative

Original documentation for Builder.less_equal_layer

def less_equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.less_equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.less_equal

def less_equal(x, y, name=None):

Returns the truth value of (x <= y) element-wise.

NOTE: LessEqual supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def less_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.less_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.less_layer

Return

Applicative

Original documentation for Builder.less_layer

def less_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.less, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.less

def less(x, y, name=None):

Returns the truth value of (x < y) element-wise.

NOTE: Less supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lgamma(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lgamma, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lgamma

Return

Applicative

Original documentation for Builder.lgamma

def lgamma(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.lgamma that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.lgamma

def lgamma(x, name=None)

Computes the log of the absolute value of Gamma(x) element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lgamma_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lgamma_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lgamma_layer

Return

Applicative

Original documentation for Builder.lgamma_layer

def lgamma_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.lgamma, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.lgamma

def lgamma(x, name=None):

Computes the log of the absolute value of Gamma(x) element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lin_space(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lin_space, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lin_space

Return

Applicative

Original documentation for Builder.lin_space

def lin_space(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.lin_space that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.lin_space

def lin_space(start, stop, num, name=None)

Generates values in an interval.

A sequence of num evenly-spaced values is generated beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.

For example:

tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]

Args: start: A Tensor. Must be one of the following types: float32, float64. First entry in the range. stop: A Tensor. Must have the same type as start. Last entry in the range. num: A Tensor. Must be one of the following types: int32, int64. Number of values to generate. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as start. 1-D. The generated values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lin_space_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lin_space_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lin_space_layer

Return

Applicative

Original documentation for Builder.lin_space_layer

def lin_space_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.lin_space, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.lin_space

def lin_space(start, stop, num, name=None):

Generates values in an interval.

A sequence of num evenly-spaced values is generated beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.

For example:

tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]

Args: start: A Tensor. Must be one of the following types: float32, float64. First entry in the range. stop: A Tensor. Must have the same type as start. Last entry in the range. num: A Tensor. Must be one of the following types: int32, int64. Number of values to generate. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as start. 1-D. The generated values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def linear_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.linear_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.linear_layer

Return

Applicative

Original documentation for Builder.linear_layer

def linear_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is the same as tensorbuilder.linear_layer.

Original Documentation for tensorbuilder.linear_layer

def linear_layer(builder, size)

Alias for .fully_connected(size, activation_fn = None, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder
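
Per the alias above, this is just a fully connected layer with no activation, i.e. a purely affine map. A minimal sketch of what the alias expands to, assuming tf.contrib.layers from the TensorFlow 0.x era (the placeholder shape is illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 8])
    # Equivalent of linear_layer(builder, 4): no activation function.
    y = tf.contrib.layers.fully_connected(x, 4, activation_fn=None)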

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def linspace_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.linspace_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.linspace_layer

Return

Applicative

Original documentation for Builder.linspace_layer

def linspace_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.linspace, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.linspace

def lin_space(start, stop, num, name=None):

Generates values in an interval.

A sequence of num evenly-spaced values is generated beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.

For example:

tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]

Args: start: A Tensor. Must be one of the following types: float32, float64. First entry in the range. stop: A Tensor. Must have the same type as start. Last entry in the range. num: A Tensor. Must be one of the following types: int32, int64. Number of values to generate. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as start. 1-D. The generated values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def list_diff(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.list_diff, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.list_diff

Return

Applicative

Original documentation for Builder.list_diff

def list_diff(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.list_diff that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.list_diff

def list_diff(x, y, out_idx=None, name=None)

Computes the difference between two lists of numbers or strings.

Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:

out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]

For example, given this input:

    x = [1, 2, 3, 4, 5, 6]
    y = [1, 3, 5]

This operation would return:

    out ==> [2, 4, 6]
    idx ==> [1, 3, 5]

Args: x: A Tensor. 1-D. Values to keep. y: A Tensor. Must have the same type as x. 1-D. Values to remove. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (out, idx). out: A Tensor. Has the same type as x. 1-D. Values present in x but not in y. idx: A Tensor of type out_idx. 1-D. Positions of x values preserved in out.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def list_diff_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.list_diff_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.list_diff_layer

Return

Applicative

Original documentation for Builder.list_diff_layer

def list_diff_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.list_diff, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.list_diff

def list_diff(x, y, out_idx=None, name=None):

Computes the difference between two lists of numbers or strings.

Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:

out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]

For example, given this input:

    x = [1, 2, 3, 4, 5, 6]
    y = [1, 3, 5]

This operation would return:

    out ==> [2, 4, 6]
    idx ==> [1, 3, 5]

Args: x: A Tensor. 1-D. Values to keep. y: A Tensor. Must have the same type as x. 1-D. Values to remove. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (out, idx). out: A Tensor. Has the same type as x. 1-D. Values present in x but not in y. idx: A Tensor of type out_idx. 1-D. Positions of x values preserved in out.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def listdiff_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.listdiff_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.listdiff_layer

Return

Applicative

Original documentation for Builder.listdiff_layer

def listdiff_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.listdiff, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.listdiff

def list_diff(x, y, out_idx=None, name=None):

Computes the difference between two lists of numbers or strings.

Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:

out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]

For example, given this input:

x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]

This operation would return:

out ==> [2, 4, 6]
idx ==> [1, 3, 5]

Args: x: A Tensor. 1-D. Values to keep. y: A Tensor. Must have the same type as x. 1-D. Values to remove. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (out, idx). out: A Tensor. Has the same type as x. 1-D. Values present in x but not in y. idx: A Tensor of type out_idx. 1-D. Positions of x values preserved in out.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def load_file_system_library(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.load_file_system_library, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.load_file_system_library

Return

Applicative

Original documentation for Builder.load_file_system_library

def load_file_system_library(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.load_file_system_library to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.load_file_system_library

def load_file_system_library(library_filename)

Loads a TensorFlow plugin, containing file system implementation.

Pass library_filename to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here.

Args: library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.

Returns: None.

Raises: RuntimeError: when unable to load the library.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def load_file_system_library_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.load_file_system_library_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.load_file_system_library_layer

Return

Applicative

Original documentation for Builder.load_file_system_library_layer

def load_file_system_library_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.load_file_system_library, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.load_file_system_library

def load_file_system_library(library_filename):

Loads a TensorFlow plugin, containing file system implementation.

Pass library_filename to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here.

Args: library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.

Returns: None.

Raises: RuntimeError: when unable to load the library.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def load_op_library(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.load_op_library, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.load_op_library

Return

Applicative

Original documentation for Builder.load_op_library

def load_op_library(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.load_op_library to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.load_op_library

def load_op_library(library_filename)

Loads a TensorFlow plugin, containing custom ops and kernels.

Pass "library_filename" to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here. When the library is loaded, ops and kernels registered in the library via the REGISTER_* macros are made available in the TensorFlow process. Note that ops with the same name as an existing op are rejected and not registered with the process.

Args: library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.

Returns: A python module containing the Python wrappers for Ops defined in the plugin.

Raises: RuntimeError: when unable to load the library or get the python wrappers.
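
A minimal usage sketch; the library path below is a placeholder and must point to a real compiled plugin for this to run:

import tensorflow as tf

# Placeholder path: substitute a real compiled plugin (.so / .dylib / .dll).
my_ops = tf.load_op_library('/path/to/libmy_ops.so')

# Ops registered in the library via the REGISTER_* macros become
# attributes of the returned module, e.g. my_ops.my_custom_op(...).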

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def load_op_library_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.load_op_library_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.load_op_library_layer

Return

Applicative

Original documentation for Builder.load_op_library_layer

def load_op_library_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.load_op_library, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.load_op_library

def load_op_library(library_filename):

Loads a TensorFlow plugin, containing custom ops and kernels.

Pass "library_filename" to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here. When the library is loaded, ops and kernels registered in the library via the REGISTER_* macros are made available in the TensorFlow process. Note that ops with the same name as an existing op are rejected and not registered with the process.

Args: library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.

Returns: A python module containing the Python wrappers for Ops defined in the plugin.

Raises: RuntimeError: when unable to load the library or get the python wrappers.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def local_response_normalization_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.local_response_normalization_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.local_response_normalization_layer

Return

Applicative

Original documentation for Builder.local_response_normalization_layer

def local_response_normalization_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.local_response_normalization, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.local_response_normalization

def lrn(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None):

Local Response Normalization.

The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)] (http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).

Args: input: A Tensor. Must be one of the following types: float32, half. 4-D. depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window. bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0). alpha: An optional float. Defaults to 1. A scale factor, usually positive. beta: An optional float. Defaults to 0.5. An exponent. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.
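
A minimal sketch applying the op directly; the shapes are illustrative:

import tensorflow as tf

# Batch of one 2x2 image with 8 channels; normalization runs along channels.
x = tf.random_normal([1, 2, 2, 8])
y = tf.nn.local_response_normalization(x, depth_radius=2,
                                       bias=1.0, alpha=1.0, beta=0.5)

with tf.Session() as sess:
    print(sess.run(y).shape)  # (1, 2, 2, 8)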

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def local_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.local_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.local_variables

Return

Applicative

Original documentation for Builder.local_variables

def local_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.local_variables to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.local_variables

def local_variables()

Returns all variables created with collection=[LOCAL_VARIABLES].

Returns: A list of local Variable objects.
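
A minimal sketch of how a variable ends up in this collection; the variable name is illustrative:

import tensorflow as tf

# Passing collections=[...] places the variable in LOCAL_VARIABLES
# instead of the default trainable/global collections.
counter = tf.Variable(0, name='counter',
                      collections=[tf.GraphKeys.LOCAL_VARIABLES])

print(tf.local_variables())  # contains the 'counter' Variable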

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def local_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.local_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.local_variables_layer

Return

Applicative

Original documentation for Builder.local_variables_layer

def local_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.local_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.local_variables

def local_variables():

Returns all variables created with collection=[LOCAL_VARIABLES].

Returns: A list of local Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log

Return

Applicative

Original documentation for Builder.log

def log(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.log to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.log

def log(x, name=None)

Computes natural logarithm of x element-wise.

I.e., y = log_e(x).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.
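
A quick numeric check of the element-wise behavior:

import tensorflow as tf

x = tf.constant([1.0, 2.718281828])
with tf.Session() as sess:
    print(sess.run(tf.log(x)))  # approximately [0.0, 1.0]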

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_layer

Return

Applicative

Original documentation for Builder.log_layer

def log_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.log, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.log

def log(x, name=None):

Computes natural logarithm of x element-wise.

I.e., y = log_e(x).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_poisson_loss(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_poisson_loss, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_poisson_loss

Return

Applicative

Original documentation for Builder.log_poisson_loss

def log_poisson_loss(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.log_poisson_loss to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.log_poisson_loss

def log_poisson_loss(log_input, targets, compute_full_loss=False, name=None)

Computes log Poisson loss given log_input.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: by default this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but it does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let c = log(x) = log_input, z = targets. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
    [ Note the second term is the Stirling's Approximation for log(z!).
      It is invariant to x and does not affect optimization, though
      important for correct relative loss comparisons. It is only
      computed when compute_full_loss == True. ]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Args: log_input: A Tensor of type float32 or float64. targets: A Tensor of the same type and shape as log_input. compute_full_loss: whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization. name: A name for the operation (optional).

Returns: A Tensor of the same shape as log_input with the componentwise log Poisson losses.

Raises: ValueError: If log_input and targets do not have the same shape.
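
A small numeric sketch of the default (constant-dropped) loss, following the componentwise formula exp(c) - z * c and the signature documented above:

import tensorflow as tf

log_input = tf.constant([0.0, 1.0])  # c = log(x)
targets = tf.constant([1.0, 2.0])    # z

loss = tf.nn.log_poisson_loss(log_input, targets)
with tf.Session() as sess:
    # exp(0) - 1*0 = 1.0 ; exp(1) - 2*1 ~= 0.718
    print(sess.run(loss))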

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_poisson_loss_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_poisson_loss_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_poisson_loss_layer

Return

Applicative

Original documentation for Builder.log_poisson_loss_layer

def log_poisson_loss_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.log_poisson_loss, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.log_poisson_loss

def log_poisson_loss(log_input, targets, compute_full_loss=False, name=None):

Computes log Poisson loss given log_input.

Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: by default this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but it does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.

For brevity, let c = log(x) = log_input, z = targets. The log Poisson loss is

  -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
    [ Note the second term is the Stirling's Approximation for log(z!).
      It is invariant to x and does not affect optimization, though
      important for correct relative loss comparisons. It is only
      computed when compute_full_loss == True. ]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

Args: log_input: A Tensor of type float32 or float64. targets: A Tensor of the same type and shape as log_input. compute_full_loss: whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization. name: A name for the operation (optional).

Returns: A Tensor of the same shape as log_input with the componentwise log Poisson losses.

Raises: ValueError: If log_input and targets do not have the same shape.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_softmax(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_softmax, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_softmax

Return

Applicative

Original documentation for Builder.log_softmax

def log_softmax(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.log_softmax to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.log_softmax

def log_softmax(logits, dim=-1, name=None)

Computes log softmax activations.

For each batch i and class j we have

logsoftmax = logits - log(reduce_sum(exp(logits), dim))

Args: logits: A non-empty Tensor. Must be one of the following types: half, float32, float64. dim: The dimension softmax would be performed on. The default is -1 which indicates the last dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as logits. Same shape as logits.

Raises: InvalidArgumentError: if logits is empty or dim is beyond the last dimension of logits.
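
A minimal sketch checking the identity logits - log(reduce_sum(exp(logits))) along the last dimension (keep_dims is the reduction argument name of this TF era):

import tensorflow as tf

logits = tf.constant([[1.0, 2.0, 3.0]])
manual = logits - tf.log(tf.reduce_sum(tf.exp(logits), -1, keep_dims=True))

with tf.Session() as sess:
    print(sess.run(tf.nn.log_softmax(logits)))
    print(sess.run(manual))  # same values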

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_softmax_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_softmax_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_softmax_layer

Return

Applicative

Original documentation for Builder.log_softmax_layer

def log_softmax_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.log_softmax, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.log_softmax

def log_softmax(logits, dim=-1, name=None):

Computes log softmax activations.

For each batch i and class j we have

logsoftmax = logits - log(reduce_sum(exp(logits), dim))

Args: logits: A non-empty Tensor. Must be one of the following types: half, float32, float64. dim: The dimension softmax would be performed on. The default is -1 which indicates the last dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as logits. Same shape as logits.

Raises: InvalidArgumentError: if logits is empty or dim is beyond the last dimension of logits.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_uniform_candidate_sampler(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_uniform_candidate_sampler, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_uniform_candidate_sampler

Return

Applicative

Original documentation for Builder.log_uniform_candidate_sampler

def log_uniform_candidate_sampler(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.log_uniform_candidate_sampler to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.log_uniform_candidate_sampler

def log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)

Samples a set of classes using a log-uniform (Zipfian) base distribution.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution for this operation is an approximately log-uniform or Zipfian distribution:

P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)

This sampler is useful when the target classes approximately follow such a distribution - for example, if the classes represent words in a lexicon sorted in decreasing order of frequency. If your classes are not ordered by decreasing frequency, do not use this op.

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.
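
A minimal sketch of calling the sampler directly; all sizes are illustrative:

import tensorflow as tf

# Two training examples with one target class each (must be int64).
true_classes = tf.constant([[0], [7]], dtype=tf.int64)

sampled, true_expected, sampled_expected = tf.nn.log_uniform_candidate_sampler(
    true_classes=true_classes, num_true=1, num_sampled=5,
    unique=True, range_max=1000)

with tf.Session() as sess:
    print(sess.run(sampled))  # 5 distinct classes drawn from [0, 1000)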

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def log_uniform_candidate_sampler_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.log_uniform_candidate_sampler_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.log_uniform_candidate_sampler_layer

Return

Applicative

Original documentation for Builder.log_uniform_candidate_sampler_layer

def log_uniform_candidate_sampler_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.log_uniform_candidate_sampler, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.log_uniform_candidate_sampler

def log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None):

Samples a set of classes using a log-uniform (Zipfian) base distribution.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution for this operation is an approximately log-uniform or Zipfian distribution:

P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)

This sampler is useful when the target classes approximately follow such a distribution - for example, if the classes represent words in a lexicon sorted in decreasing order of frequency. If your classes are not ordered by decreasing frequency, do not use this op.

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_and(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_and, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_and

Return

Applicative

Original documentation for Builder.logical_and

def logical_and(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.logical_and to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.logical_and

def logical_and(x, y, name=None)

Returns the truth value of x AND y element-wise.

NOTE: LogicalAnd supports broadcasting. More about broadcasting here

Args: x: A Tensor of type bool. y: A Tensor of type bool. name: A name for the operation (optional).

Returns: A Tensor of type bool.
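
The element-wise truth table is easy to check directly:

import tensorflow as tf

a = tf.constant([True, True, False, False])
b = tf.constant([True, False, True, False])

with tf.Session() as sess:
    print(sess.run(tf.logical_and(a, b)))  # [ True False False False]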

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_and_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_and_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_and_layer

Return

Applicative

Original documentation for Builder.logical_and_layer

def logical_and_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.logical_and, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.logical_and

def logical_and(x, y, name=None):

Returns the truth value of x AND y element-wise.

NOTE: LogicalAnd supports broadcasting. More about broadcasting here

Args: x: A Tensor of type bool. y: A Tensor of type bool. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_not(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_not, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_not

Return

Applicative

Original documentation for Builder.logical_not

def logical_not(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.logical_not to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.logical_not

def logical_not(x, name=None)

Returns the truth value of NOT x element-wise.

Args: x: A Tensor of type bool. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_not_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_not_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_not_layer

Return

Applicative

Original documentation for Builder.logical_not_layer

def logical_not_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.logical_not, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.logical_not

def logical_not(x, name=None):

Returns the truth value of NOT x element-wise.

Args: x: A Tensor of type bool. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_or(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_or, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_or

Return

Applicative

Original documentation for Builder.logical_or

def logical_or(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.logical_or to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.logical_or

def logical_or(x, y, name=None)

Returns the truth value of x OR y element-wise.

NOTE: LogicalOr supports broadcasting. More about broadcasting here

Args: x: A Tensor of type bool. y: A Tensor of type bool. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_or_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_or_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_or_layer

Return

Applicative

Original documentation for Builder.logical_or_layer

def logical_or_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.logical_or, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.logical_or

def logical_or(x, y, name=None):

Returns the truth value of x OR y element-wise.

NOTE: LogicalOr supports broadcasting. More about broadcasting here

Args: x: A Tensor of type bool. y: A Tensor of type bool. name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_xor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_xor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_xor

Return

Applicative

Original documentation for Builder.logical_xor

def logical_xor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.logical_xor to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.logical_xor

def logical_xor(x, y, name="LogicalXor")

x ^ y = (x | y) & ~(x & y).
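
Since logical_xor is defined in terms of the other logical ops, its truth table is easy to verify element-wise:

import tensorflow as tf

a = tf.constant([True, True, False, False])
b = tf.constant([True, False, True, False])

with tf.Session() as sess:
    print(sess.run(tf.logical_xor(a, b)))  # [False  True  True False]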

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def logical_xor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.logical_xor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.logical_xor_layer

Return

Applicative

Original documentation for Builder.logical_xor_layer

def logical_xor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.logical_xor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.logical_xor

def logical_xor(x, y, name="LogicalXor"):

x ^ y = (x | y) & ~(x & y).

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lrn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lrn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lrn

Return

Applicative

Original documentation for Builder.lrn

def lrn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.lrn to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.lrn

def lrn(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)

Local Response Normalization.

The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)] (http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).

Args: input: A Tensor. Must be one of the following types: float32, half. 4-D. depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window. bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0). alpha: An optional float. Defaults to 1. A scale factor, usually positive. beta: An optional float. Defaults to 0.5. An exponent. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def lrn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.lrn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.lrn_layer

Return

Applicative

Original documentation for Builder.lrn_layer

def lrn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.lrn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.lrn

def lrn(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None):

Local Response Normalization.

The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)] (http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).

Args: input: A Tensor. Must be one of the following types: float32, half. 4-D. depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window. bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0). alpha: An optional float. Defaults to 1. A scale factor, usually positive. beta: An optional float. Defaults to 0.5. An exponent. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def make_all(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.make_all, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.make_all

Return

Applicative

Original documentation for Builder.make_all

def make_all(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.make_all to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.make_all

def make_all(module_name, doc_string_modules=None)

Generates __all__ from the docstring of one or more modules.

Usage: make_all(__name__) or make_all(__name__, [sys.modules(__name__), other_module]). The docstring modules must each have a docstring, and __all__ will contain all symbols with @@ references that currently exist in the module named module_name.

Args: module_name: The name of the module (usually __name__). doc_string_modules: a list of modules from which to take docstring. If None, then a list containing only the module named module_name is used.

Returns: A list suitable for use as __all__.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def make_all_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.make_all_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.make_all_layer

Return

Applicative

Original documentation for Builder.make_all_layer

def make_all_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.make_all, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.make_all

def make_all(module_name, doc_string_modules=None):

Generates __all__ from the docstring of one or more modules.

Usage: make_all(__name__) or make_all(__name__, [sys.modules(__name__), other_module]). The docstring modules must each have a docstring, and __all__ will contain all symbols with @@ references that currently exist in the module named module_name.

Args: module_name: The name of the module (usually __name__). doc_string_modules: a list of modules from which to take docstring. If None, then a list containing only the module named module_name is used.

Returns: A list suitable for use as __all__.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def make_template(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.make_template, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.make_template

Return

Applicative

Original documentation for Builder.make_template

def make_template(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.make_template to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.make_template

def make_template(name_, func_, create_scope_now_=False, unique_name_=None)

Given an arbitrary function, wrap it so that it does variable sharing.

This wraps func_ in a Template and partially evaluates it. Templates are functions that create variables the first time they are called and reuse them thereafter. In order for func_ to be compatible with a Template it must have the following properties:

  • The function should create all trainable variables and any variables that should be reused by calling tf.get_variable. If a trainable variable is created using tf.Variable, then a ValueError will be thrown. Variables that are intended to be locals can be created by specifying tf.Variable(..., trainable=False).
  • The function may use variable scopes and other templates internally to create and reuse variables, but it shouldn't use tf.get_variable to capture variables that are defined outside of the scope of the function.
  • Internal scopes and variable names should not depend on any arguments that are not supplied to make_template. If you make a mistake, you will in general get a ValueError telling you that you are trying to reuse a variable that doesn't exist.

In the following example, both z and w will be scaled by the same y. It is important to note that if we had not passed scalar_name and had used different variable names for z and w, a ValueError would be thrown because the variable could not be reused.

def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name, shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')

z = scale_by_y(input1)
w = scale_by_y(input2)

As a safeguard, the returned function will raise a ValueError after the first call if trainable variables are created by calling tf.Variable.

If all of these are true, then 2 properties are enforced by the template:

  1. Calling the same template multiple times will share all non-local variables.
  2. Two different templates are guaranteed to be unique, unless you reenter the same variable scope as the initial definition of a template and redefine it. An example of this exception:

def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name, shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

with tf.variable_scope('scope') as vs:
    scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
    z = scale_by_y(input1)
    w = scale_by_y(input2)

# Creates a template that reuses the variables above.
with tf.variable_scope(vs, reuse=True):
    scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y')
    z2 = scale_by_y2(input1)
    w2 = scale_by_y2(input2)

Depending on the value of create_scope_now_, the full variable scope may be captured either at the time of first call or at the time of construction. If this option is set to True, then all Tensors created by repeated calls to the template will have an extra trailing _N+1 to their name, as the first time the scope is entered in the Template constructor no Tensors are created.

Note: name_, func_ and create_scope_now_ have a trailing underscore to reduce the likelihood of collisions with kwargs.

Args: name_: A name for the scope created by this template. If necessary, the name will be made unique by appending _N to the name. func_: The function to wrap. create_scope_now_: Boolean controlling whether the scope should be created when the template is constructed or when the template is called. Default is False, meaning the scope is created when the template is called. unique_name_: When used, it overrides name_ and is not made unique. If a template of the same scope/unique_name already exists and reuse is false, an error is raised. Defaults to None. **kwargs: Keyword arguments to apply to func_.

Returns: A function to encapsulate a set of variables which should be created once and reused. An enclosing scope will be created either where make_template is called, or wherever the result is called, depending on the value of create_scope_now_. Regardless of the value, the first time the template is called it will enter the scope with no reuse, and call func_ to create variables, which are guaranteed to be unique. All subsequent calls will re-enter the scope and reuse those variables.

Raises: ValueError: if the name is None.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def make_template_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.make_template_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.make_template_layer

Return

Applicative

Original documentation for Builder.make_template_layer

def make_template_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.make_template, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.make_template

def make_template(name_, func_, create_scope_now_=False, unique_name_=None):

Given an arbitrary function, wrap it so that it does variable sharing.

This wraps func_ in a Template and partially evaluates it. Templates are functions that create variables the first time they are called and reuse them thereafter. In order for func_ to be compatible with a Template it must have the following properties:

  • The function should create all trainable variables and any variables that should be reused by calling tf.get_variable. If a trainable variable is created using tf.Variable, then a ValueError will be thrown. Variables that are intended to be locals can be created by specifying tf.Variable(..., trainable=False).
  • The function may use variable scopes and other templates internally to create and reuse variables, but it shouldn't use tf.get_variable to capture variables that are defined outside of the scope of the function.
  • Internal scopes and variable names should not depend on any arguments that are not supplied to make_template. If you make a mistake, you will in general get a ValueError telling you that you are trying to reuse a variable that doesn't exist.

In the following example, both z and w will be scaled by the same y. It is important to note that if we had not passed scalar_name and had used different variable names for z and w, a ValueError would be thrown because the variable could not be reused.

def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name, shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')

z = scale_by_y(input1)
w = scale_by_y(input2)

As a safeguard, the returned function will raise a ValueError after the first call if trainable variables are created by calling tf.Variable.

If all of these are true, then 2 properties are enforced by the template:

  1. Calling the same template multiple times will share all non-local variables.
  2. Two different templates are guaranteed to be unique, unless you reenter the same variable scope as the initial definition of a template and redefine it. An example of this exception:

def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name, shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

with tf.variable_scope('scope') as vs:
    scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
    z = scale_by_y(input1)
    w = scale_by_y(input2)

# Creates a template that reuses the variables above.
with tf.variable_scope(vs, reuse=True):
    scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y')
    z2 = scale_by_y2(input1)
    w2 = scale_by_y2(input2)

Depending on the value of create_scope_now_, the full variable scope may be captured either at the time of first call or at the time of construction. If this option is set to True, then all Tensors created by repeated calls to the template will have an extra trailing _N+1 to their name, as the first time the scope is entered in the Template constructor no Tensors are created.

Note: name_, func_ and create_scope_now_ have a trailing underscore to reduce the likelihood of collisions with kwargs.

Args: name_: A name for the scope created by this template. If necessary, the name will be made unique by appending _N to the name. func_: The function to wrap. create_scope_now_: Boolean controlling whether the scope should be created when the template is constructed or when the template is called. Default is False, meaning the scope is created when the template is called. unique_name_: When used, it overrides name_ and is not made unique. If a template of the same scope/unique_name already exists and reuse is false, an error is raised. Defaults to None. **kwargs: Keyword arguments to apply to func_.

Returns: A function to encapsulate a set of variables which should be created once and reused. An enclosing scope will be created either where make_template is called, or wherever the result is called, depending on the value of create_scope_now_. Regardless of the value, the first time the template is called it will enter the scope with no reuse, and call func_ to create variables, which are guaranteed to be unique. All subsequent calls will re-enter the scope and reuse those variables.

Raises: ValueError: if the name is None.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def map(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.map, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.map

Return

Applicative

Original documentation for Builder.map

def map(builder, fn):

@immutable

Let x be the Tensor inside a Builder builder and fn be a function from a tensor to a tensor; then builder.map(fn, *args, **kwargs) computes fn(x, *args, **kwargs) and stores the result inside a new Builder. The Builder class comes with a lot of patched methods that help you do things quickly and make the syntax nicer, but if we don't have the method you need, just pass the function you want to use to map, or even consider using tensorbuilder.core.builders.Builder.register_map_method.

Parameters

  • fn: a function of type tensor -> tensor.
  • All extra positional and named arguments are forwarded to fn

Return

  • tensorbuilder.core.builders.Builder

Examples

import tensorflow as tf
from tensorflow.contrib import layers
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 40])
keep_prob = tf.placeholder(tf.float32)

h = (
    tb.build(x)
    .map(layers.fully_connected, 100, activation_fn=tf.nn.tanh)
    .map(tf.nn.dropout, keep_prob)
    .map(layers.fully_connected, 30, activation_fn=tf.nn.softmax)
    .tensor()
)

print(h)

Same using the DSL

import tensorflow as tf
from tensorflow.contrib import layers
from tensorbuilder import tb


x = tf.placeholder(tf.float32, shape=[None, 40])
keep_prob = tf.placeholder(tf.float32)

h = tb.pipe(
    x,
    tb.map(layers.fully_connected, 100, activation_fn=tf.nn.tanh)
    .map(tf.nn.dropout, keep_prob)
    .map(layers.fully_connected, 30, activation_fn=tf.nn.softmax)
    .tensor()
)

print(h)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def map_each(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.map_each, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.map_each

Return

Applicative

Original documentation for BuilderTree.map_each

def map_each(tree, fn):

@immutable

Expects a function fn with type Tensor -> Tensor and applies this function to all leaf Tensors separately, resulting in a new BuilderTree.

Parameters

  • fn: a function of type Tensor -> Tensor.
  • All additional *args and **kwargs are forwarded to fn

Return

  • tensorbuilder.core.builders.BuilderTree

Example

Let's redo the example in tensorbuilder.core.builders.BuilderTree.reduce using map_each to shorten the code

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = (
    tb.build(x)
    .branch(lambda x: [
        x.relu_layer(20)
    ,
        x.sigmoid_layer(20)
    ,
        x.tanh_layer(20)
    ])
    .map_each(tf.contrib.layers.fully_connected, 5, activation_fn=None)
    .reduce(tf.add)
    .softmax()
    .tensor()
)

Remember that this

.map_each(tf.contrib.layers.fully_connected, 5, activation_fn=None)
.reduce(tf.add)
.softmax()

is equivalent to just

.softmax_layer(5)

for BuilderTrees. Same example using the DSL

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = tb.pipe(
    x,
    [
        tb.relu_layer(20)
    ,
        tb.sigmoid_layer(20)
    ,
        tb.tanh_layer(20)
    ],
    tb.map_each(tf.contrib.layers.fully_connected, 5, activation_fn=None)
    .reduce(tf.add)
    .softmax()
    .tensor()
)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def map_fn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.map_fn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.map_fn

Return

Applicative

Original documentation for Builder.map_fn

def map_fn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.map_fn to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.map_fn

def map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)

map on the list of tensors unpacked from elems on dimension 0.

The simplest version of map repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems. dtype is the data type of the return value of fn. Users must provide dtype if it is different from the data type of elems.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [values.shape[0]] + fn(values[0]).shape.

This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.

Furthermore, fn may emit a different structure than its input. For example, fn may look like: fn = lambda t1: return (t1 + 1, t1 - 1). In this case, the dtype parameter is not optional: dtype must be a type or (possibly nested) tuple of types matching the output of fn.

Args: fn: The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as dtype if one is provided, otherwise it must have the same structure as elems. elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn. dtype: (optional) The output type(s) of fn. If fn returns a structure of Tensors differing from the structure of elems, then dtype is not optional and must have the same structure as the output of fn. parallel_iterations: (optional) The number of iterations allowed to run in parallel. back_prop: (optional) True enables support for back propagation. swap_memory: (optional) True enables GPU-CPU memory swapping. infer_shape: (optional) False disables tests for consistent output shapes. name: (optional) Name prefix for the returned tensors.

Returns: A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, from first to last.

Raises: TypeError: if fn is not callable or the structure of the output of fn and dtype do not match. ValueError: if the lengths of the output of fn and dtype do not match.

Examples:

```python
elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]
```

```python
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
```

```python
elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
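
Note that, under the lifting described above, Builder.map_fn forwards the Builder's Tensor as the first argument of tf.map_fn, i.e. as fn, which is usually not what you want. A more natural pattern is to apply tf.map_fn through Builder.map. A minimal sketch, not taken from the library's docs and assuming only the map behavior documented earlier:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

# square every element of each row via tf.map_fn applied through .map
h = (
    tb.build(x)
    .map(lambda t: tf.map_fn(lambda row: row * row, t))
    .tensor()
)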

def map_fn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.map_fn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.map_fn_layer

Return

Applicative

Original documentation for Builder.map_fn_layer

def map_fn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.map_fn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.map_fn

def map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None):

map on the list of tensors unpacked from elems on dimension 0.

The simplest version of map repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems. dtype is the data type of the return value of fn. Users must provide dtype if it is different from the data type of elems.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [values.shape[0]] + fn(values[0]).shape.

This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.

Furthermore, fn may emit a different structure than its input. For example, fn may look like: fn = lambda t1: return (t1 + 1, t1 - 1). In this case, the dtype parameter is not optional: dtype must be a type or (possibly nested) tuple of types matching the output of fn.

Args: fn: The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as dtype if one is provided, otherwise it must have the same structure as elems. elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn. dtype: (optional) The output type(s) of fn. If fn returns a structure of Tensors differing from the structure of elems, then dtype is not optional and must have the same structure as the output of fn. parallel_iterations: (optional) The number of iterations allowed to run in parallel. back_prop: (optional) True enables support for back propagation. swap_memory: (optional) True enables GPU-CPU memory swapping. infer_shape: (optional) False disables tests for consistent output shapes. name: (optional) Name prefix for the returned tensors.

Returns: A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, from first to last.

Raises: TypeError: if fn is not callable or the structure of the output of fn and dtype do not match. ValueError: if the lengths of the output of fn and dtype do not match.

Examples:

```python
elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]
```

```python
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
```

```python
elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matching_files(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matching_files, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matching_files

Return

Applicative

Original documentation for Builder.matching_files

def matching_files(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matching_files to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matching_files

def matching_files(pattern, name=None)

Returns the set of files matching a pattern.

Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion.

Args: pattern: A Tensor of type string. A (scalar) shell wildcard pattern. name: A name for the operation (optional).

Returns: A Tensor of type string. A vector of matching filenames.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matching_files_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matching_files_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matching_files_layer

Return

Applicative

Original documentation for Builder.matching_files_layer

def matching_files_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matching_files, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matching_files

def matching_files(pattern, name=None):

Returns the set of files matching a pattern.

Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion.

Args: pattern: A Tensor of type string. A (scalar) shell wildcard pattern. name: A name for the operation (optional).

Returns: A Tensor of type string. A vector of matching filenames.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matmul(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matmul, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matmul

Return

Applicative

Original documentation for Builder.matmul

def matmul(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matmul to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matmul

def matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)

Multiplies matrix a by matrix b, producing a * b.

The inputs must be two-dimensional matrices, with matching inner dimensions, possibly after transposition.

Both matrices must be of the same type. The supported types are: float32, float64, int32, complex64.

Either matrix can be transposed on the fly by setting the corresponding flag to True. This is False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default.

For example:

```python
# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
                                                      [4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
                                                         [9. 10.]
                                                         [11. 12.]]
c = tf.matmul(a, b) => [[58 64]
                        [139 154]]
```

Args: a: Tensor of type float32, float64, int32 or complex64. b: Tensor with same type as a. transpose_a: If True, a is transposed before multiplication. transpose_b: If True, b is transposed before multiplication. a_is_sparse: If True, a is treated as a sparse matrix. b_is_sparse: If True, b is treated as a sparse matrix. name: Name for the operation (optional).

Returns: A Tensor of the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
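
Since the lifted Builder.matmul forwards the Builder's Tensor as the first argument a of tf.matmul, a second matrix can be supplied directly. A minimal sketch, assuming the lifting behavior documented above; the weight tensor w is purely illustrative:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 5])
w = tf.constant(1.0, shape=[5, 3])  # illustrative fixed weights

h = (
    tb.build(x)
    .matmul(w)  # computes tf.matmul(x, w) -> shape [None, 3]
    .tensor()
)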

def matmul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matmul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matmul_layer

Return

Applicative

Original documentation for Builder.matmul_layer

def matmul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matmul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matmul

def matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None):

Multiplies matrix a by matrix b, producing a * b.

The inputs must be two-dimensional matrices, with matching inner dimensions, possibly after transposition.

Both matrices must be of the same type. The supported types are: float32, float64, int32, complex64.

Either matrix can be transposed on the fly by setting the corresponding flag to True. This is False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default.

For example:

```python
# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
                                                      [4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
                                                         [9. 10.]
                                                         [11. 12.]]
c = tf.matmul(a, b) => [[58 64]
                        [139 154]]
```

Args: a: Tensor of type float32, float64, int32 or complex64. b: Tensor with same type as a. transpose_a: If True, a is transposed before multiplication. transpose_b: If True, b is transposed before multiplication. a_is_sparse: If True, a is treated as a sparse matrix. b_is_sparse: If True, b is treated as a sparse matrix. name: Name for the operation (optional).

Returns: A Tensor of the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_band_part(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_band_part, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_band_part

Return

Applicative

Original documentation for Builder.matrix_band_part

def matrix_band_part(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_band_part to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_band_part

def matrix_band_part(input, num_lower, num_upper, name=None)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows: Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].

The indicator function in_band(m, n) is one if (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper), and zero otherwise.

For example:

```prettyprint
# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]
```

Useful special cases:

```prettyprint
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0)  ==> Diagonal.
```

Args: input: A Tensor. Rank k tensor. num_lower: A Tensor of type int64. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle. num_upper: A Tensor of type int64. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Rank k tensor of the same shape as input. The extracted banded tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
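
As a sketch of how the lifted form reads in practice (assuming the lifting behavior documented above), keeping only the lower triangle of a matrix held in a Builder looks like this:

import tensorflow as tf
from tensorbuilder import tb

m = tf.placeholder(tf.float32, shape=[4, 4])

lower = (
    tb.build(m)
    .matrix_band_part(-1, 0)  # tf.matrix_band_part(m, -1, 0): lower triangular part
    .tensor()
)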

def matrix_band_part_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_band_part_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_band_part_layer

Return

Applicative

Original documentation for Builder.matrix_band_part_layer

def matrix_band_part_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_band_part, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_band_part

def matrix_band_part(input, num_lower, num_upper, name=None):

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows: Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].

The indicator function in_band(m, n) is one if (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper), and zero otherwise.

For example:

```prettyprint
# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]
```

Useful special cases:

```prettyprint
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0)  ==> Diagonal.
```

Args: input: A Tensor. Rank k tensor. num_lower: A Tensor of type int64. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle. num_upper: A Tensor of type int64. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Rank k tensor of the same shape as input. The extracted banded tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_determinant(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_determinant, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_determinant

Return

Applicative

Original documentation for Builder.matrix_determinant

def matrix_determinant(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_determinant to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_determinant

def matrix_determinant(input, name=None)

Computes the determinant of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices [..., :, :].

Args: input: A Tensor. Must be one of the following types: float32, float64. Shape is [..., M, M]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Shape is [...].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
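
A minimal sketch of the lifted form, assuming the behavior documented above; the batch of 3x3 matrices is illustrative:

import tensorflow as tf
from tensorbuilder import tb

m = tf.placeholder(tf.float32, shape=[None, 3, 3])

det = (
    tb.build(m)
    .matrix_determinant()  # tf.matrix_determinant(m) -> shape [None]
    .tensor()
)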

def matrix_determinant_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_determinant_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_determinant_layer

Return

Applicative

Original documentation for Builder.matrix_determinant_layer

def matrix_determinant_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_determinant, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_determinant

def matrix_determinant(input, name=None):

Computes the determinant of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices [..., :, :].

Args: input: A Tensor. Must be one of the following types: float32, float64. Shape is [..., M, M]. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Shape is [...].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_diag(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_diag, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_diag

Return

Applicative

Original documentation for Builder.matrix_diag

def matrix_diag(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_diag to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_diag

def matrix_diag(diagonal, name=None)

Returns a batched diagonal tensor with given batched diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].

For example:

```prettyprint
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
# and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

# which has shape (2, 4, 4)
```

Args: diagonal: A Tensor. Rank k, where k >= 1. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as diagonal. Rank k+1, with output.shape = diagonal.shape + [diagonal.shape[-1]].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
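
A minimal sketch of the lifted form, assuming the behavior documented above; chaining matrix_diag_part (documented below) afterwards recovers the original diagonals:

import tensorflow as tf
from tensorbuilder import tb

d = tf.placeholder(tf.float32, shape=[None, 4])

diag = (
    tb.build(d)
    .matrix_diag()       # tf.matrix_diag(d) -> shape [None, 4, 4]
    .matrix_diag_part()  # recovers d -> shape [None, 4]
    .tensor()
)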

def matrix_diag_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_diag_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_diag_layer

Return

Applicative

Original documentation for Builder.matrix_diag_layer

def matrix_diag_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_diag, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_diag

def matrix_diag(diagonal, name=None):

Returns a batched diagonal tensor with given batched diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].

For example:

```prettyprint
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
# and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

# which has shape (2, 4, 4)
```

Args: diagonal: A Tensor. Rank k, where k >= 1. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as diagonal. Rank k+1, with output.shape = diagonal.shape + [diagonal.shape[-1]].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_diag_part(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_diag_part, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_diag_part

Return

Applicative

Original documentation for Builder.matrix_diag_part

def matrix_diag_part(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_diag_part to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_diag_part

def matrix_diag_part(input, name=None)

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the diagonal part of the batched input. The diagonal part is computed as follows:

Assume input has k dimensions [I, J, K, ..., N, N], then the output is a tensor of rank k - 1 with dimensions [I, J, K, ..., N] where:

diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n].

The input must be at least a matrix.

For example:

```prettyprint
# 'input' is [[[1, 0, 0, 0]
#              [0, 2, 0, 0]
#              [0, 0, 3, 0]
#              [0, 0, 0, 4]],
#             [[5, 0, 0, 0]
#              [0, 6, 0, 0]
#              [0, 0, 7, 0]
#              [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

# which has shape (2, 4)
```

Args: input: A Tensor. Rank k tensor where k >= 2 and the last two dimensions are equal. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The extracted diagonal(s) having shape diagonal.shape = input.shape[:-1].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_diag_part_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_diag_part_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_diag_part_layer

Return

Applicative

Original documentation for Builder.matrix_diag_part_layer

def matrix_diag_part_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_diag_part, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_diag_part

def matrix_diag_part(input, name=None):

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the diagonal part of the batched input. The diagonal part is computed as follows:

Assume input has k dimensions [I, J, K, ..., N, N], then the output is a tensor of rank k - 1 with dimensions [I, J, K, ..., N] where:

diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n].

The input must be at least a matrix.

For example:

```prettyprint
# 'input' is [[[1, 0, 0, 0]
#              [0, 2, 0, 0]
#              [0, 0, 3, 0]
#              [0, 0, 0, 4]],
#             [[5, 0, 0, 0]
#              [0, 6, 0, 0]
#              [0, 0, 7, 0]
#              [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

# which has shape (2, 4)
```

Args: input: A Tensor. Rank k tensor where k >= 2 and the last two dimensions are equal. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The extracted diagonal(s) having shape diagonal.shape = input.shape[:-1].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_inverse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_inverse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_inverse

Return

Applicative

Original documentation for Builder.matrix_inverse

def matrix_inverse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_inverse to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_inverse

def matrix_inverse(input, adjoint=None, name=None)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.

Args: input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. adjoint: An optional bool. Defaults to False. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Shape is [..., M, M].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
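
Keyword arguments are forwarded unchanged, so flags such as adjoint pass straight through the lifted form. A minimal sketch, assuming the lifting behavior documented above:

import tensorflow as tf
from tensorbuilder import tb

m = tf.placeholder(tf.float32, shape=[None, 3, 3])

inv = (
    tb.build(m)
    .matrix_inverse(adjoint=True)  # tf.matrix_inverse(m, adjoint=True)
    .tensor()
)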

def matrix_inverse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_inverse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_inverse_layer

Return

Applicative

Original documentation for Builder.matrix_inverse_layer

def matrix_inverse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_inverse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_inverse

def matrix_inverse(input, adjoint=None, name=None):

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.

Args: input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. adjoint: An optional bool. Defaults to False. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Shape is [..., M, M].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_set_diag(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_set_diag, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_set_diag

Return

Applicative

Original documentation for Builder.matrix_set_diag

def matrix_set_diag(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_set_diag to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_set_diag

def matrix_set_diag(input, diagonal, name=None)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the diagonals of the innermost matrices. These will be overwritten by the values in diagonal. The batched matrices must be square.

The output is computed as follows:

Assume input has k+1 dimensions [I, J, K, ..., N, N] and diagonal has k dimensions [I, J, K, ..., N]. Then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

  • output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n] for m == n.
  • output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n] for m != n.

Args: input: A Tensor. Rank k+1, where k >= 1. diagonal: A Tensor. Must have the same type as input. Rank k, where k >= 1. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Rank k+1, with output.shape = input.shape.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_set_diag_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_set_diag_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_set_diag_layer

Return

Applicative

Original documentation for Builder.matrix_set_diag_layer

def matrix_set_diag_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_set_diag, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_set_diag

def matrix_set_diag(input, diagonal, name=None):

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the diagonals of the innermost matrices. These will be overwritten by the values in diagonal. The batched matrices must be square.

The output is computed as follows:

Assume input has k+1 dimensions [I, J, K, ..., N, N] and diagonal has k dimensions [I, J, K, ..., N]. Then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

  • output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n] for m == n.
  • output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n] for m != n.

Args: input: A Tensor. Rank k+1, where k >= 1. diagonal: A Tensor. Must have the same type as input. Rank k, where k >= 1. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. Rank k+1, with output.shape = input.shape.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_solve(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_solve, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_solve

Return

Applicative

Original documentation for Builder.matrix_solve

def matrix_solve(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_solve to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_solve

def matrix_solve(matrix, rhs, adjoint=None, name=None)

Solves systems of linear equations.

Matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. Rhs is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If adjoint is False then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].

Args: matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K]. adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as matrix. Shape is [..., M, K].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
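
Here the Builder's Tensor plays the role of matrix and the right-hand side is passed as an argument. A minimal sketch, assuming the lifting behavior documented above:

import tensorflow as tf
from tensorbuilder import tb

a = tf.placeholder(tf.float32, shape=[3, 3])
b = tf.placeholder(tf.float32, shape=[3, 1])

solution = (
    tb.build(a)
    .matrix_solve(b)  # tf.matrix_solve(a, b): solves a * x = b
    .tensor()
)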

def matrix_solve_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_solve_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_solve_layer

Return

Applicative

Original documentation for Builder.matrix_solve_layer

def matrix_solve_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_solve, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_solve

def matrix_solve(matrix, rhs, adjoint=None, name=None):

Solves systems of linear equations.

Matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. Rhs is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If adjoint is False then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].

Args: matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K]. adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as matrix. Shape is [..., M, K].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_solve_ls(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_solve_ls, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_solve_ls

Return

Applicative

Original documentation for Builder.matrix_solve_ls

def matrix_solve_ls(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_solve_ls to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_solve_ls

def matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)

Solves one or more linear least-squares problems.

matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form M-by-N matrices. Rhs is a tensor of shape [..., M, K] whose inner-most 2 dimensions form M-by-K matrices. The computed output is a Tensor of shape [..., N, K] whose inner-most 2 dimensions form M-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

Below we will use the following notation for each pair of matrix and right-hand sides in the batch:

matrix=\(A \in \Re^{m \times n}\), rhs=\(B \in \Re^{m \times k}\), output=\(X \in \Re^{n \times k}\), l2_regularizer=\(\lambda\).

If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|A Z - B\|_F^2 + \lambda \|Z\|_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|Z\|_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.

If fast is False an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.

Args: matrix: Tensor of shape [..., M, N]. rhs: Tensor of shape [..., M, K]. l2_regularizer: 0-D double Tensor. Ignored if fast=False. fast: bool. Defaults to True. name: string, optional name of the operation.

Returns: output: Tensor of shape [..., N, K] whose inner-most 2 dimensions form M-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
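
The same pattern works for the least-squares solver, with extra keyword arguments forwarded to tf.matrix_solve_ls. A minimal sketch under the documented lifting behavior; the shapes are illustrative:

import tensorflow as tf
from tensorbuilder import tb

a = tf.placeholder(tf.float32, shape=[5, 3])  # over-determined system
b = tf.placeholder(tf.float32, shape=[5, 1])

x_ls = (
    tb.build(a)
    .matrix_solve_ls(b, l2_regularizer=0.1)  # regularized solution -> shape [3, 1]
    .tensor()
)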

def matrix_solve_ls_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_solve_ls_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_solve_ls_layer

Return

Applicative

Original documentation for Builder.matrix_solve_ls_layer

def matrix_solve_ls_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_solve_ls, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_solve_ls

def matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None):

Solves one or more linear least-squares problems.

matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form M-by-N matrices. Rhs is a tensor of shape [..., M, K] whose inner-most 2 dimensions form M-by-K matrices. The computed output is a Tensor of shape [..., N, K] whose inner-most 2 dimensions form M-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

Below we will use the following notation for each pair of matrix and right-hand sides in the batch:

matrix=\(A \in \Re^{m \times n}\), rhs=\(B \in \Re^{m \times k}\), output=\(X \in \Re^{n \times k}\), l2_regularizer=\(\lambda\).

If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|A Z - B\|_F^2 + \lambda \|Z\|_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|Z\|_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.

If fast is False an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.

Args: matrix: Tensor of shape [..., M, N]. rhs: Tensor of shape [..., M, K]. l2_regularizer: 0-D double Tensor. Ignored if fast=False. fast: bool. Defaults to True. name: string, optional name of the operation.

Returns: output: Tensor of shape [..., N, K] whose inner-most 2 dimensions form M-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_transpose(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_transpose, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_transpose

Return

Applicative

Original documentation for Builder.matrix_transpose

def matrix_transpose(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_transpose to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_transpose

def matrix_transpose(a, name="matrix_transpose")

Transposes last two dimensions of tensor a.

For example:

```python
# Matrix with no batch dimension.
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.matrix_transpose(x) ==> [[1 4]
                            [2 5]
                            [3 6]]

# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
```

Args: a: A Tensor with rank >= 2. name: A name for the operation (optional).

Returns: A transposed batch matrix Tensor.

Raises: ValueError: If a is determined statically to have rank < 2.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
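
The applicative alias composes the same operation for use with the DSL, mirroring the pipe examples shown for map earlier. A minimal sketch under those documented conventions:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 2, 3])

xt = tb.pipe(
    x,
    tb.matrix_transpose()  # transposes the last two dimensions -> shape [None, 3, 2]
    .tensor()
)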

def matrix_transpose_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_transpose_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_transpose_layer

Return

Applicative

Original documentation for Builder.matrix_transpose_layer

def matrix_transpose_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_transpose, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_transpose

def matrix_transpose(a, name="matrix_transpose"):

Transposes last two dimensions of tensor a.

For example:

```python
# Matrix with no batch dimension.
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.matrix_transpose(x) ==> [[1 4]
                            [2 5]
                            [3 6]]

# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
```

Args: a: A Tensor with rank >= 2. name: A name for the operation (optional).

Returns: A transposed batch matrix Tensor.

Raises: ValueError: If a is determined statically to have rank < 2.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_triangular_solve(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_triangular_solve, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_triangular_solve

Return

Applicative

Original documentation for Builder.matrix_triangular_solve

def matrix_triangular_solve(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.matrix_triangular_solve to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.matrix_triangular_solve

def matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. If lower is True then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. rhs is a tensor of shape [..., M, K].

The output is a tensor of shape [..., M, K]. If adjoint is False then the innermost matrices in output satisfy the matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy the matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].

Args: matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K]. lower: An optional bool. Defaults to True. Boolean indicating whether the innermost matrices in matrix are lower or upper triangular. adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as matrix. Shape is [..., M, K].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def matrix_triangular_solve_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.matrix_triangular_solve_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.matrix_triangular_solve_layer

Return

Applicative

Original documentation for Builder.matrix_triangular_solve_layer

def matrix_triangular_solve_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.matrix_triangular_solve, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.matrix_triangular_solve

def matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None):

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. If lower is True then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. rhs is a tensor of shape [..., M, K].

The output is a tensor of shape [..., M, K]. If adjoint is False then the innermost matrices in output satisfy the matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy the matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].

Args: matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M]. rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K]. lower: An optional bool. Defaults to True. Boolean indicating whether the innermost matrices in matrix are lower or upper triangular. adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as matrix. Shape is [..., M, K].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool

Return

Applicative

Original documentation for Builder.max_pool

def max_pool(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.max_pool to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.max_pool

def max_pool(value, ksize, strides, padding, data_format="NHWC", name=None)

Performs the max pooling on the input.

Args: value: A 4-D Tensor with shape [batch, height, width, channels] and type tf.float32. ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here data_format: A string. 'NHWC' and 'NCHW' are supported. name: Optional name for the operation.

Returns: A Tensor with type tf.float32. The max pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
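
Since ksize, strides and padding are forwarded verbatim, the lifted form reads like tf.nn.max_pool with its input taken from the Builder. A minimal sketch under the documented lifting behavior; the image shape is illustrative:

import tensorflow as tf
from tensorbuilder import tb

images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

pooled = (
    tb.build(images)
    .max_pool(ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')  # 2x2 max pooling
    .tensor()
)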

def max_pool3d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool3d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool3d

Return

Applicative

Original documentation for Builder.max_pool3d

def max_pool3d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.max_pool3d to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.max_pool3d

def max_pool3d(input, ksize, strides, padding, name=None)

Performs 3D max pooling on the input.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, channels] tensor to pool over. ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The max pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool3d_grad(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool3d_grad, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool3d_grad

Return

Applicative

Origial documentation for Builder.max_pool3d_grad

def max_pool3d_grad(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.max_pool3d_grad that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.max_pool3d_grad

def max_pool3d_grad(orig_input, orig_output, grad, ksize, strides, padding, name=None)

Computes gradients of max pooling function.

Args: orig_input: A Tensor of type float32. The original input tensor. orig_output: A Tensor of type float32. The original output tensor. grad: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Output backprop of shape [batch, depth, rows, cols, channels]. ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as grad.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool3d_grad_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool3d_grad_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool3d_grad_layer

Return

Applicative

Original documentation for Builder.max_pool3d_grad_layer

def max_pool3d_grad_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.max_pool3d_grad, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.max_pool3d_grad

def max_pool3d_grad(orig_input, orig_output, grad, ksize, strides, padding, name=None):

Computes gradients of max pooling function.

Args: orig_input: A Tensor of type float32. The original input tensor. orig_output: A Tensor of type float32. The original output tensor. grad: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Output backprop of shape [batch, depth, rows, cols, channels]. ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as grad.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool3d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool3d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool3d_layer

Return

Applicative

Original documentation for Builder.max_pool3d_layer

def max_pool3d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.max_pool3d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.max_pool3d

def max_pool3d(input, ksize, strides, padding, name=None):

Performs 3D max pooling on the input.

Args: input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Shape [batch, depth, rows, cols, channels] tensor to pool over. ksize: A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1. strides: A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The max pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool_2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool_2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool_2d

Return

Applicative

Original documentation for Builder.max_pool_2d

def max_pool_2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tflearn.layers.max_pool_2d that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tflearn.layers.max_pool_2d

def max_pool_2d(incoming, kernel_size, strides=None, padding="same", name="MaxPool2D")

Max Pooling 2D.

Input: 4-D Tensor [batch, height, width, in_channels].

Output: 4-D Tensor [batch, pooled height, pooled width, in_channels].

Arguments: incoming: Tensor. Incoming 4-D Layer. kernel_size: int or list of int. Pooling kernel size. strides: int or list of int. Strides of conv operation. Default: same as kernel_size. padding: str from "same", "valid". Padding algo to use. Default: 'same'. name: A name for this layer (optional). Default: 'MaxPool2D'.

Attributes: scope: Scope. This layer scope.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool_layer

Return

Applicative

Original documentation for Builder.max_pool_layer

def max_pool_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.max_pool, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.max_pool

def max_pool(value, ksize, strides, padding, data_format="NHWC", name=None):

Performs the max pooling on the input.

Args: value: A 4-D Tensor with shape [batch, height, width, channels] and type tf.float32. ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here data_format: A string. 'NHWC' and 'NCHW' are supported. name: Optional name for the operation.

Returns: A Tensor with type tf.float32. The max pooled output tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def max_pool_with_argmax(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool_with_argmax, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool_with_argmax

Return

Applicative

Original documentation for Builder.max_pool_with_argmax

def max_pool_with_argmax(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.max_pool_with_argmax that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.max_pool_with_argmax

def max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None)

Performs max pooling on the input and outputs both max values and indices.

The indices in argmax are flattened, so that a maximum value at position [b, y, x, c] becomes flattened index ((b * height + y) * width + x) * channels + c.

Args: input: A Tensor. Must be one of the following types: float32, half. 4-D with shape [batch, height, width, channels]. Input to pool over. ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. Targmax: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (output, argmax). output: A Tensor. Has the same type as input. The max pooled output tensor. argmax: A Tensor of type Targmax. 4-D. The flattened indices of the max values chosen for each output.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
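
A short sketch of the two-output form documented above, with the flattened-index formula worked through for one illustrative position:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [1, 4, 4, 1])

    # Returns both the pooled values and the flattened indices of the maxima.
    output, argmax = tf.nn.max_pool_with_argmax(
        x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Per the formula above, a maximum at [b, y, x, c] = [0, 1, 2, 0] in this
    # 4x4 single-channel input flattens to ((0*4 + 1)*4 + 2)*1 + 0 = 6.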

def max_pool_with_argmax_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.max_pool_with_argmax_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.max_pool_with_argmax_layer

Return

Applicative

Original documentation for Builder.max_pool_with_argmax_layer

def max_pool_with_argmax_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.max_pool_with_argmax, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.max_pool_with_argmax

def max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None):

Performs max pooling on the input and outputs both max values and indices.

The indices in argmax are flattened, so that a maximum value at position [b, y, x, c] becomes flattened index ((b * height + y) * width + x) * channels + c.

Args: input: A Tensor. Must be one of the following types: float32, half. 4-D with shape [batch, height, width, channels]. Input to pool over. ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding: A string from: "SAME", "VALID". The type of padding algorithm to use. Targmax: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (output, argmax). output: A Tensor. Has the same type as input. The max pooled output tensor. argmax: A Tensor of type Targmax. 4-D. The flattened indices of the max values chosen for each output.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def maximize(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.maximize, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.maximize

Return

Applicative

Original documentation for Builder.maximize

def maximize(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tensorbuilder.Builder.maximize that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tensorbuilder.Builder.maximize

def maximize(tensor, optimizer)

(The original function provides no docstring.)

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def maximum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.maximum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.maximum

Return

Applicative

Original documentation for Builder.maximum

def maximum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.maximum that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.maximum

def maximum(x, y, name=None)

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

NOTE: Maximum supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
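
The broadcasting note above is easiest to see with a scalar second operand; a minimal sketch with illustrative values:

    import tensorflow as tf

    x = tf.constant([1., 5., 3.])
    y = tf.constant(4.)  # scalar broadcasts against every element of x

    m = tf.maximum(x, y)  # evaluates to [4., 5., 4.]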

def maximum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.maximum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.maximum_layer

Return

Applicative

Original documentation for Builder.maximum_layer

def maximum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.maximum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.maximum

def maximum(x, y, name=None):

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

NOTE: Maximum supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def merge_all_summaries(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.merge_all_summaries, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.merge_all_summaries

Return

Applicative

Original documentation for Builder.merge_all_summaries

def merge_all_summaries(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.merge_all_summaries that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.merge_all_summaries

def merge_all_summaries(key="summaries")

Merges all summaries collected in the default graph.

Args: key: GraphKey used to collect the summaries. Defaults to GraphKeys.SUMMARIES.

Returns: If no summaries were collected, returns None. Otherwise returns a scalar Tensor of type string containing the serialized Summary protocol buffer resulting from the merging.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def merge_all_summaries_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.merge_all_summaries_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.merge_all_summaries_layer

Return

Applicative

Original documentation for Builder.merge_all_summaries_layer

def merge_all_summaries_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.merge_all_summaries, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.merge_all_summaries

def merge_all_summaries(key="summaries"):

Merges all summaries collected in the default graph.

Args: key: GraphKey used to collect the summaries. Defaults to GraphKeys.SUMMARIES.

Returns: If no summaries were collected, returns None. Otherwise returns a scalar Tensor of type string containing the serialized Summary protocol buffer resulting from the merging.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def merge_summary(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.merge_summary, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.merge_summary

Return

Applicative

Original documentation for Builder.merge_summary

def merge_summary(builder, tag):

THIS METHOD IS AUTOMATICALLY GENERATED

Same as tf.merge_summary(inputs, collections=None, name=None) but with the summary tensor as its first parameter.

Return

Builder

Original documentation for tf.merge_summary

def merge_summary(inputs, collections=None, name=None):

Merges summaries.

This op creates a Summary protocol buffer that contains the union of all the values in the input summaries.

When the Op is run, it reports an InvalidArgument error if multiple values in the summaries to merge use the same tag.

Args: inputs: A list of string Tensor objects containing serialized Summary protocol buffers. collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to [GraphKeys.SUMMARIES]. name: A name for the operation (optional).

Returns: A scalar Tensor of type string. The serialized Summary protocol buffer resulting from the merging.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
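
A minimal sketch of merging two summaries, assuming the contemporaneous tf.scalar_summary op to produce the serialized inputs:

    import tensorflow as tf

    loss = tf.constant(0.5)

    # Each summary op yields a serialized Summary protocol buffer string.
    s1 = tf.scalar_summary("loss", loss)
    s2 = tf.scalar_summary("loss_times_two", loss * 2.)

    # A single serialized Summary containing the union of both values.
    merged = tf.merge_summary([s1, s2])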

def meshgrid(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.meshgrid, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.meshgrid

Return

Applicative

Original documentation for Builder.meshgrid

def meshgrid(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.meshgrid that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.meshgrid

def meshgrid()

Broadcasts parameters for evaluation on an N-D grid.

Given N one-dimensional coordinate arrays *args, returns a list outputs of N-D coordinate arrays for evaluating expressions on an N-D grid.

Notes:

meshgrid supports cartesian ('xy') and matrix ('ij') indexing conventions. When the indexing argument is set to 'xy' (the default), the broadcasting instructions for the first two dimensions are swapped.

Examples:

Calling X, Y = meshgrid(x, y) with the tensors

    x = [1, 2, 3]
    y = [4, 5, 6]

results in

    X = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
    Y = [[4, 5, 6], [4, 5, 6], [4, 5, 6]]

Args: *args: Tensors with rank 1. indexing: Either 'xy' or 'ij' (optional, default: 'xy'). name: A name for the operation (optional).

Returns: outputs: A list of N Tensors with rank N

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
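
A sketch that reproduces the documented example above:

    import tensorflow as tf

    x = tf.constant([1, 2, 3])
    y = tf.constant([4, 5, 6])

    # With the default 'xy' indexing; per the documentation above this yields
    # X = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
    # Y = [[4, 5, 6], [4, 5, 6], [4, 5, 6]]
    X, Y = tf.meshgrid(x, y)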

def meshgrid_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.meshgrid_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.meshgrid_layer

Return

Applicative

Original documentation for Builder.meshgrid_layer

def meshgrid_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.meshgrid, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.meshgrid

def meshgrid():

Broadcasts parameters for evaluation on an N-D grid.

Given N one-dimensional coordinate arrays *args, returns a list outputs of N-D coordinate arrays for evaluating expressions on an N-D grid.

Notes:

meshgrid supports cartesian ('xy') and matrix ('ij') indexing conventions. When the indexing argument is set to 'xy' (the default), the broadcasting instructions for the first two dimensions are swapped.

Examples:

Calling X, Y = meshgrid(x, y) with the tensors

    x = [1, 2, 3]
    y = [4, 5, 6]

results in

    X = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
    Y = [[4, 5, 6], [4, 5, 6], [4, 5, 6]]

Args: *args: Tensors with rank 1. indexing: Either 'xy' or 'ij' (optional, default: 'xy'). name: A name for the operation (optional).

Returns: outputs: A list of N Tensors with rank N

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def min_max_variable_partitioner(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.min_max_variable_partitioner, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.min_max_variable_partitioner

Return

Applicative

Original documentation for Builder.min_max_variable_partitioner

def min_max_variable_partitioner(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.min_max_variable_partitioner that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.min_max_variable_partitioner

def min_max_variable_partitioner(max_partitions=1, axis=0, min_slice_size=262144, bytes_per_string_element=16)

Partitioner to allocate minimum size per slice.

Returns a partitioner that partitions the variable of given shape and dtype such that each partition has a minimum of min_slice_size slice of the variable. The maximum number of such partitions (upper bound) is given by max_partitions.

Args: max_partitions: Upper bound on the number of partitions. Defaults to 1. axis: Axis along which to partition the variable. Defaults to 0. min_slice_size: Minimum size of the variable slice per partition. Defaults to 256K. bytes_per_string_element: If the Variable is of type string, this provides an estimate of how large each scalar in the Variable is.

Returns: A partition function usable as the partitioner argument to variable_scope, get_variable, and get_partitioned_variable_list.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
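
A sketch of passing the returned partition function as the partitioner argument mentioned in the Returns section; the variable shape and slice size are illustrative:

    import tensorflow as tf

    # At most 4 partitions, each holding at least 64KB of the variable.
    partitioner = tf.min_max_variable_partitioner(max_partitions=4,
                                                  min_slice_size=64 * 1024)

    with tf.variable_scope("embeddings", partitioner=partitioner):
        emb = tf.get_variable("emb", shape=[100000, 128], dtype=tf.float32)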

def min_max_variable_partitioner_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.min_max_variable_partitioner_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.min_max_variable_partitioner_layer

Return

Applicative

Original documentation for Builder.min_max_variable_partitioner_layer

def min_max_variable_partitioner_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.min_max_variable_partitioner, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.min_max_variable_partitioner

def min_max_variable_partitioner(max_partitions=1, axis=0, min_slice_size=262144, bytes_per_string_element=16):

Partitioner to allocate minimum size per slice.

Returns a partitioner that partitions the variable of given shape and dtype such that each partition has a minimum of min_slice_size slice of the variable. The maximum number of such partitions (upper bound) is given by max_partitions.

Args: max_partitions: Upper bound on the number of partitions. Defaults to 1. axis: Axis along which to partition the variable. Defaults to 0. min_slice_size: Minimum size of the variable slice per partition. Defaults to 256K. bytes_per_string_element: If the Variable is of type string, this provides an estimate of how large each scalar in the Variable is.

Returns: A partition function usable as the partitioner argument to variable_scope, get_variable, and get_partitioned_variable_list.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def minimize(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.minimize, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.minimize

Return

Applicative

Original documentation for Builder.minimize

def minimize(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tensorbuilder.Builder.minimize that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tensorbuilder.Builder.minimize

def minimize(tensor, optimizer)

(The original function provides no docstring.)

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def minimum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.minimum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.minimum

Return

Applicative

Original documentation for Builder.minimum

def minimum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.minimum that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.minimum

def minimum(x, y, name=None)

Returns the min of x and y (i.e. x < y ? x : y) element-wise.

NOTE: Minimum supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def minimum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.minimum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.minimum_layer

Return

Applicative

Original documentation for Builder.minimum_layer

def minimum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.minimum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.minimum

def minimum(x, y, name=None):

Returns the min of x and y (i.e. x < y ? x : y) element-wise.

NOTE: Minimum supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def mod(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.mod, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.mod

Return

Applicative

Original documentation for Builder.mod

def mod(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.mod that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.mod

def mod(x, y, name=None)

Returns element-wise remainder of division.

NOTE: Mod supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: int32, int64, float32, float64. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
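
A minimal sketch of the element-wise remainder with a broadcast scalar divisor (values illustrative):

    import tensorflow as tf

    x = tf.constant([7, 8, 9])
    r = tf.mod(x, 3)  # evaluates to [1, 2, 0]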

def mod_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.mod_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.mod_layer

Return

Applicative

Original documentation for Builder.mod_layer

def mod_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.mod, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.mod

def mod(x, y, name=None):

Returns element-wise remainder of division.

NOTE: Mod supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: int32, int64, float32, float64. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def model_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.model_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.model_variables

Return

Applicative

Original documentation for Builder.model_variables

def model_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.model_variables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.model_variables

def model_variables()

Returns all variables in the MODEL_VARIABLES collection.

Returns: A list of local Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def model_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.model_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.model_variables_layer

Return

Applicative

Original documentation for Builder.model_variables_layer

def model_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.model_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.model_variables

def model_variables():

Returns all variables in the MODEL_VARIABLES collection.

Returns: A list of local Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def moments(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.moments, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.moments

Return

Applicative

Original documentation for Builder.moments

def moments(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.moments that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.moments

def moments(x, axes, shift=None, name=None, keep_dims=False)

Calculate the mean and variance of x.

The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.

When using these moments for batch normalization (see tf.nn.batch_normalization): * for so-called "global normalization", used with convolutional filters with shape [batch, height, width, depth], pass axes=[0, 1, 2]. * for simple batch normalization pass axes=[0] (batch only).

Args: x: A Tensor. axes: array of ints. Axes along which to compute mean and variance. shift: A Tensor containing the value by which to shift the data for numerical stability, or None if no shift is to be performed. A shift close to the true mean provides the most numerically stable results. name: Name used to scope the operations that compute the moments. keep_dims: produce moments with the same dimensionality as the input.

Returns: Two Tensor objects: mean and variance.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
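
A sketch of the "simple batch normalization" case called out above, i.e. axes=[0]; the placeholder shape is illustrative:

    import tensorflow as tf

    # Activations for a batch of 32 examples with 10 features each.
    x = tf.placeholder(tf.float32, [32, 10])

    # Per-feature mean and variance computed across the batch dimension.
    mean, variance = tf.nn.moments(x, axes=[0])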

def moments_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.moments_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.moments_layer

Return

Applicative

Original documentation for Builder.moments_layer

def moments_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.moments, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.moments

def moments(x, axes, shift=None, name=None, keep_dims=False):

Calculate the mean and variance of x.

The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.

When using these moments for batch normalization (see tf.nn.batch_normalization): * for so-called "global normalization", used with convolutional filters with shape [batch, height, width, depth], pass axes=[0, 1, 2]. * for simple batch normalization pass axes=[0] (batch only).

Args: x: A Tensor. axes: array of ints. Axes along which to compute mean and variance. shift: A Tensor containing the value by which to shift the data for numerical stability, or None if no shift is to be performed. A shift close to the true mean provides the most numerically stable results. name: Name used to scope the operations that compute the moments. keep_dims: produce moments with the same dimensionality as the input.

Returns: Two Tensor objects: mean and variance.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def moving_average_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.moving_average_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.moving_average_variables

Return

Applicative

Original documentation for Builder.moving_average_variables

def moving_average_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.moving_average_variables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.moving_average_variables

def moving_average_variables()

Returns all variables that maintain their moving averages.

If an ExponentialMovingAverage object is created and the apply() method is called on a list of variables, these variables will be added to the GraphKeys.MOVING_AVERAGE_VARIABLES collection. This convenience function returns the contents of that collection.

Returns: A list of Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
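
A sketch of how variables end up in the collection this function reads, following the ExponentialMovingAverage behavior described above:

    import tensorflow as tf

    w = tf.Variable(tf.zeros([10]), name="w")

    ema = tf.train.ExponentialMovingAverage(decay=0.99)
    maintain_op = ema.apply([w])  # adds w to MOVING_AVERAGE_VARIABLES

    averaged = tf.moving_average_variables()  # now contains w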

def moving_average_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.moving_average_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.moving_average_variables_layer

Return

Applicative

Original documentation for Builder.moving_average_variables_layer

def moving_average_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.moving_average_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.moving_average_variables

def moving_average_variables():

Returns all variables that maintain their moving averages.

If an ExponentialMovingAverage object is created and the apply() method is called on a list of variables, these variables will be added to the GraphKeys.MOVING_AVERAGE_VARIABLES collection. This convenience function returns the contents of that collection.

Returns: A list of Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def mul(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.mul, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.mul

Return

Applicative

Original documentation for Builder.mul

def mul(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.mul that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.mul

def mul(x, y, name=None)

Returns x * y element-wise.

NOTE: Mul supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def mul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.mul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.mul_layer

Return

Applicative

Original documentation for Builder.mul_layer

def mul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.mul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.mul

def mul(x, y, name=None):

Returns x * y element-wise.

NOTE: Mul supports broadcasting. More about broadcasting here

Args: x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def multinomial(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.multinomial, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.multinomial

Return

Applicative

Original documentation for Builder.multinomial

def multinomial(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.multinomial that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.multinomial

def multinomial(logits, num_samples, seed=None, name=None)

Draws samples from a multinomial distribution.

Example:

```python
# samples has shape [1, 5], where each value is either 0 or 1 with equal
# probability.
samples = tf.multinomial(tf.log([[10., 10.]]), 5)
```

Args: logits: 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log probabilities for all classes. num_samples: 0-D. Number of independent samples to draw for each row slice. seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior. name: Optional name for the operation.

Returns: The drawn samples of shape [batch_size, num_samples].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
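
A sketch with unequal class probabilities; the logits values and sample count are illustrative:

    import tensorflow as tf

    # Unnormalized log-probabilities for 3 classes in a single batch row.
    logits = tf.log([[0.1, 0.3, 0.6]])

    # Ten class indices drawn for that row -> shape [1, 10].
    samples = tf.multinomial(logits, 10)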

def multinomial_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.multinomial_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.multinomial_layer

Return

Applicative

Original documentation for Builder.multinomial_layer

def multinomial_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.multinomial, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.multinomial

def multinomial(logits, num_samples, seed=None, name=None):

Draws samples from a multinomial distribution.

Example:

```python
# samples has shape [1, 5], where each value is either 0 or 1 with equal
# probability.
samples = tf.multinomial(tf.log([[10., 10.]]), 5)
```

Args: logits: 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log probabilities for all classes. num_samples: 0-D. Number of independent samples to draw for each row slice. seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior. name: Optional name for the operation.

Returns: The drawn samples of shape [batch_size, num_samples].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def name_scope(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.name_scope, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.name_scope

Return

Applicative

Original documentation for Builder.name_scope

def name_scope(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.name_scope that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.name_scope

def name_scope()

Returns a context manager for use when defining a Python op.

This context manager validates that the given values are from the same graph, makes that graph the default graph, and pushes a name scope in that graph (see Graph.name_scope() for more details on that).

For example, to define a new Python op called my_op:

    def my_op(a, b, c, name=None):
        with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
            a = tf.convert_to_tensor(a, name="a")
            b = tf.convert_to_tensor(b, name="b")
            c = tf.convert_to_tensor(c, name="c")
            # Define some computation that uses `a`, `b`, and `c`.
            return foo_op(..., name=scope)

Args: name: The name argument that is passed to the op function. default_name: The default name to use if the name argument is None. values: The list of Tensor arguments that are passed to the op function.

Returns: A context manager for use in defining Python ops. Yields the name scope.

Raises: ValueError: if neither name nor default_name is provided but values are.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def name_scope_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.name_scope_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.name_scope_layer

Return

Applicative

Original documentation for Builder.name_scope_layer

def name_scope_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.name_scope, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.name_scope

def name_scope():

Returns a context manager for use when defining a Python op.

This context manager validates that the given values are from the same graph, makes that graph the default graph, and pushes a name scope in that graph (see Graph.name_scope() for more details on that).

For example, to define a new Python op called my_op:

    def my_op(a, b, c, name=None):
        with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
            a = tf.convert_to_tensor(a, name="a")
            b = tf.convert_to_tensor(b, name="b")
            c = tf.convert_to_tensor(c, name="c")
            # Define some computation that uses `a`, `b`, and `c`.
            return foo_op(..., name=scope)

Args: name: The name argument that is passed to the op function. default_name: The default name to use if the name argument is None. values: The list of Tensor arguments that are passed to the op function.

Returns: A context manager for use in defining Python ops. Yields the name scope.

Raises: ValueError: if neither name nor default_name is provided but values are.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def nce_loss(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.nce_loss, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.nce_loss

Return

Applicative

Original documentation for Builder.nce_loss

def nce_loss(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.nce_loss that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.nce_loss

def nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy="mod", name="nce_loss")

Computes and returns the noise-contrastive estimation training loss.

See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models] (http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference] (../../extras/candidate_sampling.pdf)

Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see log_uniform_candidate_sampler.

Note: In the case where num_true > 1, we assign to each target class the target probability 1 / num_true so that the target probabilities sum to 1 per-example.

Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.

Args: weights: A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings. biases: A Tensor of shape [num_classes]. The class biases. inputs: A Tensor of shape [batch_size, dim]. The forward activations of the input network. labels: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_sampled: An int. The number of classes to randomly sample per batch. num_classes: An int. The number of possible classes. num_true: An int. The number of target classes per training example. sampled_values: a tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. (if None, we default to log_uniform_candidate_sampler) remove_accidental_hits: A bool. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to True, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference] (../../extras/candidate_sampling.pdf). Default is False. partition_strategy: A string specifying the partitioning strategy, relevant if len(weights) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name: A name for the operation (optional).

Returns: A batch_size 1-D tensor of per-example NCE losses.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
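
A sketch wiring up the documented arguments; all sizes are illustrative, and the zero initializers merely stand in for real embedding weights:

    import tensorflow as tf

    num_classes, dim, batch_size, num_sampled = 10000, 128, 32, 64

    weights = tf.Variable(tf.zeros([num_classes, dim]))
    biases = tf.Variable(tf.zeros([num_classes]))
    inputs = tf.placeholder(tf.float32, [batch_size, dim])
    labels = tf.placeholder(tf.int64, [batch_size, 1])  # num_true = 1

    # One NCE loss per example; reduce to a scalar for training.
    loss = tf.reduce_mean(tf.nn.nce_loss(weights, biases, inputs, labels,
                                         num_sampled, num_classes))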

def nce_loss_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.nce_loss_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.nce_loss_layer

Return

Applicative

Original documentation for Builder.nce_loss_layer

def nce_loss_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.nce_loss, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.nce_loss

def nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy="mod", name="nce_loss"):

Computes and returns the noise-contrastive estimation training loss.

See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models] (http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference] (../../extras/candidate_sampling.pdf)

Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see log_uniform_candidate_sampler.

Note: In the case where num_true > 1, we assign to each target class the target probability 1 / num_true so that the target probabilities sum to 1 per-example.

Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.

Args: weights: A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings. biases: A Tensor of shape [num_classes]. The class biases. inputs: A Tensor of shape [batch_size, dim]. The forward activations of the input network. labels: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_sampled: An int. The number of classes to randomly sample per batch. num_classes: An int. The number of possible classes. num_true: An int. The number of target classes per training example. sampled_values: a tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. (if None, we default to log_uniform_candidate_sampler) remove_accidental_hits: A bool. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to True, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference] (../../extras/candidate_sampling.pdf). Default is False. partition_strategy: A string specifying the partitioning strategy, relevant if len(weights) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name: A name for the operation (optional).

Returns: A batch_size 1-D tensor of per-example NCE losses.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def neg(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.neg, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.neg

Return

Applicative

Original documentation for Builder.neg

def neg(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.neg that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.neg

def neg(x, name=None)

Computes numerical negative value element-wise.

I.e., (y = -x).

Args: x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def neg_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.neg_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.neg_layer

Return

Applicative

Original documentation for Builder.neg_layer

def neg_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.neg, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.neg

def neg(x, name=None):

Computes numerical negative value element-wise.

I.e., (y = -x).

Args:

  • x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns: A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def no_op(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.no_op, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.no_op

Return

Applicative

Original documentation for Builder.no_op

def no_op(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.no_op that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.no_op

def no_op(name=None)

Does nothing. Only useful as a placeholder for control edges.

Args: name: A name for the operation (optional).

Returns: The created Operation.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
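
The typical use of tf.no_op is as an anchor for control dependencies; a minimal sketch in plain TensorFlow (not tensorbuilder-specific):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None])
    assert_op = tf.Assert(tf.reduce_all(tf.greater_equal(x, 0.)), [x])

    # The no_op computes nothing itself; running `checked` merely forces
    # the assertion to execute first via the control dependency.
    with tf.control_dependencies([assert_op]):
        checked = tf.no_op(name="checked")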

def no_op_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.no_op_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.no_op_layer

Return

Applicative

Original documentation for Builder.no_op_layer

def no_op_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.no_op, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.no_op

def no_op(name=None):

Does nothing. Only useful as a placeholder for control edges.

Args: name: A name for the operation (optional).

Returns: The created Operation.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def no_regularizer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.no_regularizer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.no_regularizer

Return

Applicative

Original documentation for Builder.no_regularizer

def no_regularizer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.no_regularizer that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.no_regularizer

def no_regularizer(_)

Use this function to prevent regularization of variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def no_regularizer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.no_regularizer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.no_regularizer_layer

Return

Applicative

Original documentation for Builder.no_regularizer_layer

def no_regularizer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.no_regularizer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.no_regularizer

def no_regularizer(_):

Use this function to prevent regularization of variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def normalize_moments(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.normalize_moments, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.normalize_moments

Return

Applicative

Original documentation for Builder.normalize_moments

def normalize_moments(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.normalize_moments that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.normalize_moments

def normalize_moments(counts, mean_ss, variance_ss, shift, name=None)

Calculate the mean and variance based on the sufficient statistics.

Args:

  • counts: A Tensor containing the total count of the data (one value).
  • mean_ss: A Tensor containing the mean sufficient statistics: the (possibly shifted) sum of the elements to average over.
  • variance_ss: A Tensor containing the variance sufficient statistics: the (possibly shifted) squared sum of the data to compute the variance over.
  • shift: A Tensor containing the value by which the data is shifted for numerical stability, or None if no shift was performed.
  • name: Name used to scope the operations that compute the moments.

Returns: Two Tensor objects: mean and variance.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
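
A NumPy sketch of the arithmetic these sufficient statistics imply (my reading of the documented arguments, not the library's source): with mean_ss = sum(x - shift) and variance_ss = sum((x - shift)**2), the moments are recovered as below.

    import numpy as np

    def normalize_moments_reference(counts, mean_ss, variance_ss, shift):
        # shifted_mean = sum(x - shift) / n
        shifted_mean = mean_ss / counts
        mean = shifted_mean + (shift if shift is not None else 0.0)
        # var = sum((x - shift)**2) / n - shifted_mean**2
        variance = variance_ss / counts - np.square(shifted_mean)
        return mean, variance

    x = np.array([1.0, 2.0, 3.0, 4.0])
    shift = 2.0
    mean, var = normalize_moments_reference(
        counts=float(x.size),
        mean_ss=np.sum(x - shift),
        variance_ss=np.sum((x - shift) ** 2),
        shift=shift)
    # mean == 2.5, var == 1.25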

def normalize_moments_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.normalize_moments_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.normalize_moments_layer

Return

Applicative

Original documentation for Builder.normalize_moments_layer

def normalize_moments_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.normalize_moments, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.normalize_moments

def normalize_moments(counts, mean_ss, variance_ss, shift, name=None):

Calculate the mean and variance based on the sufficient statistics.

Args:

  • counts: A Tensor containing the total count of the data (one value).
  • mean_ss: A Tensor containing the mean sufficient statistics: the (possibly shifted) sum of the elements to average over.
  • variance_ss: A Tensor containing the variance sufficient statistics: the (possibly shifted) squared sum of the data to compute the variance over.
  • shift: A Tensor containing the value by which the data is shifted for numerical stability, or None if no shift was performed.
  • name: Name used to scope the operations that compute the moments.

Returns: Two Tensor objects: mean and variance.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def not_equal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.not_equal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.not_equal

Return

Applicative

Original documentation for Builder.not_equal

def not_equal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.not_equal that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.not_equal

def not_equal(x, y, name=None)

Returns the truth value of (x != y) element-wise.

NOTE: NotEqual supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, quint8, qint8, qint32, string, bool, complex128.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
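
A small sketch of the broadcasting behavior mentioned above (plain TensorFlow, values chosen for illustration):

    import tensorflow as tf

    x = tf.constant([[1, 2], [3, 4]])
    y = tf.constant(2)  # the scalar broadcasts against x

    # Elementwise x != y: [[True, False], [True, True]]
    mask = tf.not_equal(x, y)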

def not_equal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.not_equal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.not_equal_layer

Return

Applicative

Original documentation for Builder.not_equal_layer

def not_equal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.not_equal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.not_equal

def not_equal(x, y, name=None):

Returns the truth value of (x != y) element-wise.

NOTE: NotEqual supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, quint8, qint8, qint32, string, bool, complex128.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns: A Tensor of type bool.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def one_hot(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.one_hot, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.one_hot

Return

Applicative

Original documentation for Builder.one_hot

def one_hot(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.one_hot that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.one_hot

def one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)

Returns a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

on_value and off_value must have matching data types. If dtype is also provided, they must be the same data type as specified by dtype.

If on_value is not provided, it will default to the value 1 with type dtype

If off_value is not provided, it will default to the value 0 with type dtype

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

If indices is a scalar the output shape will be a vector of length depth

If indices is a vector of length features, the output shape will be:

  • features x depth if axis == -1
  • depth x features if axis == 0

If indices is a matrix (batch) with shape [batch, features], the output shape will be:

  • batch x features x depth if axis == -1
  • batch x depth x features if axis == 1
  • depth x batch x features if axis == 0

If dtype is not provided, it will attempt to assume the data type of on_value or off_value, if one or both are passed in. If none of on_value, off_value, or dtype are provided, dtype will default to the value tf.float32

Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.), both on_value and off_value must be provided to one_hot

Examples

Suppose that

    indices = [0, 2, -1, 1]
    depth = 3
    on_value = 5.0
    off_value = 0.0
    axis = -1

Then output is [4 x 3]:

    output =
    [5.0 0.0 0.0]  // one_hot(0)
    [0.0 0.0 5.0]  // one_hot(2)
    [0.0 0.0 0.0]  // one_hot(-1)
    [0.0 5.0 0.0]  // one_hot(1)

Suppose that

    indices = [[0, 2], [1, -1]]
    depth = 3
    on_value = 1.0
    off_value = 0.0
    axis = -1

Then output is [2 x 2 x 3]:

    output =
    [
      [1.0, 0.0, 0.0]  // one_hot(0)
      [0.0, 0.0, 1.0]  // one_hot(2)
    ][
      [0.0, 1.0, 0.0]  // one_hot(1)
      [0.0, 0.0, 0.0]  // one_hot(-1)
    ]

Using default values for on_value and off_value:

    indices = [0, 1, 2]
    depth = 3

The output will be

    output = [[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]]

Args:

  • indices: A Tensor of indices.
  • depth: A scalar defining the depth of the one hot dimension.
  • on_value: A scalar defining the value to fill in output when indices[j] = i. (default: 1)
  • off_value: A scalar defining the value to fill in output when indices[j] != i. (default: 0)
  • axis: The axis to fill (default: -1, a new inner-most axis).
  • dtype: The data type of the output tensor.

Returns: output: The one-hot tensor.

Raises:

  • TypeError: If dtype of either on_value or off_value don't match dtype.
  • TypeError: If dtype of on_value and off_value don't match one another.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def one_hot_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.one_hot_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.one_hot_layer

Return

Applicative

Original documentation for Builder.one_hot_layer

def one_hot_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.one_hot, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.one_hot

def one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None):

Returns a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

on_value and off_value must have matching data types. If dtype is also provided, they must be the same data type as specified by dtype.

If on_value is not provided, it will default to the value 1 with type dtype

If off_value is not provided, it will default to the value 0 with type dtype

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

If indices is a scalar the output shape will be a vector of length depth

If indices is a vector of length features, the output shape will be:

  • features x depth if axis == -1
  • depth x features if axis == 0

If indices is a matrix (batch) with shape [batch, features], the output shape will be:

  • batch x features x depth if axis == -1
  • batch x depth x features if axis == 1
  • depth x batch x features if axis == 0

If dtype is not provided, it will attempt to assume the data type of on_value or off_value, if one or both are passed in. If none of on_value, off_value, or dtype are provided, dtype will default to the value tf.float32

Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.), both on_value and off_value must be provided to one_hot

Examples

Suppose that

    indices = [0, 2, -1, 1]
    depth = 3
    on_value = 5.0
    off_value = 0.0
    axis = -1

Then output is [4 x 3]:

    output =
    [5.0 0.0 0.0]  // one_hot(0)
    [0.0 0.0 5.0]  // one_hot(2)
    [0.0 0.0 0.0]  // one_hot(-1)
    [0.0 5.0 0.0]  // one_hot(1)

Suppose that

    indices = [[0, 2], [1, -1]]
    depth = 3
    on_value = 1.0
    off_value = 0.0
    axis = -1

Then output is [2 x 2 x 3]:

    output =
    [
      [1.0, 0.0, 0.0]  // one_hot(0)
      [0.0, 0.0, 1.0]  // one_hot(2)
    ][
      [0.0, 1.0, 0.0]  // one_hot(1)
      [0.0, 0.0, 0.0]  // one_hot(-1)
    ]

Using default values for on_value and off_value:

    indices = [0, 1, 2]
    depth = 3

The output will be

    output = [[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]]

Args:

  • indices: A Tensor of indices.
  • depth: A scalar defining the depth of the one hot dimension.
  • on_value: A scalar defining the value to fill in output when indices[j] = i. (default: 1)
  • off_value: A scalar defining the value to fill in output when indices[j] != i. (default: 0)
  • axis: The axis to fill (default: -1, a new inner-most axis).
  • dtype: The data type of the output tensor.

Returns: output: The one-hot tensor.

Raises:

  • TypeError: If dtype of either on_value or off_value don't match dtype.
  • TypeError: If dtype of on_value and off_value don't match one another.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ones(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ones, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ones

Return

Applicative

Original documentation for Builder.ones

def ones(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ones that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ones

def ones(shape, dtype=tf.float32, name=None)

Creates a tensor with all elements set to 1.

This operation returns a tensor of type dtype with shape shape and all elements set to 1.

For example:

    tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]

Args:

  • shape: Either a list of integers, or a 1-D Tensor of type int32.
  • dtype: The type of an element in the resulting Tensor.
  • name: A name for the operation (optional).

Returns: A Tensor with all elements set to 1.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ones_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ones_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ones_initializer

Return

Applicative

Original documentation for Builder.ones_initializer

def ones_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ones_initializer that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ones_initializer

def ones_initializer(shape, dtype=tf.float32, partition_info=None)

An adaptor for ones() to match the Initializer spec.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
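
Since the function itself already matches the (shape, dtype, partition_info) initializer spec, it is passed to tf.get_variable uncalled; a hedged sketch of the usual pattern (the variable name is an assumption for the example):

    import tensorflow as tf

    # ones_initializer is itself the initializer function, so it is
    # handed to get_variable without being called.
    bias = tf.get_variable("bias", shape=[10], initializer=tf.ones_initializer)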

def ones_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ones_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ones_initializer_layer

Return

Applicative

Original documentation for Builder.ones_initializer_layer

def ones_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ones_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ones_initializer

def ones_initializer(shape, dtype=tf.float32, partition_info=None):

An adaptor for ones() to match the Initializer spec.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ones_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ones_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ones_layer

Return

Applicative

Original documentation for Builder.ones_layer

def ones_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ones, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ones

def ones(shape, dtype=tf.float32, name=None):

Creates a tensor with all elements set to 1.

This operation returns a tensor of type dtype with shape shape and all elements set to 1.

For example:

    tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]

Args:

  • shape: Either a list of integers, or a 1-D Tensor of type int32.
  • dtype: The type of an element in the resulting Tensor.
  • name: A name for the operation (optional).

Returns: A Tensor with all elements set to 1.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ones_like(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ones_like, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ones_like

Return

Applicative

Original documentation for Builder.ones_like

def ones_like(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.ones_like that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.ones_like

def ones_like(tensor, dtype=None, name=None, optimize=True)

Creates a tensor with all elements set to 1.

Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to 1. Optionally, you can specify a new type (dtype) for the returned tensor.

For example:

```python
# 'tensor' is [[1, 2, 3], [4, 5, 6]]
tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
```

Args:

  • tensor: A Tensor.
  • dtype: A type for the returned Tensor. Must be float32, float64, int8, int16, int32, int64, uint8, complex64, complex128 or bool.
  • name: A name for the operation (optional).
  • optimize: if true, attempt to statically determine the shape of 'tensor' and encode it as a constant.

Returns: A Tensor with all elements set to 1.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def ones_like_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.ones_like_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.ones_like_layer

Return

Applicative

Original documentation for Builder.ones_like_layer

def ones_like_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.ones_like, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.ones_like

def ones_like(tensor, dtype=None, name=None, optimize=True):

Creates a tensor with all elements set to 1.

Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to 1. Optionally, you can specify a new type (dtype) for the returned tensor.

For example:

```python
# 'tensor' is [[1, 2, 3], [4, 5, 6]]
tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
```

Args:

  • tensor: A Tensor.
  • dtype: A type for the returned Tensor. Must be float32, float64, int8, int16, int32, int64, uint8, complex64, complex128 or bool.
  • name: A name for the operation (optional).
  • optimize: if true, attempt to statically determine the shape of 'tensor' and encode it as a constant.

Returns: A Tensor with all elements set to 1.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def op_scope(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.op_scope, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.op_scope

Return

Applicative

Original documentation for Builder.op_scope

def op_scope(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.op_scope that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.op_scope

def op_scope()

DEPRECATED. Same as name_scope above, just different argument order.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def op_scope_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.op_scope_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.op_scope_layer

Return

Applicative

Original documentation for Builder.op_scope_layer

def op_scope_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.op_scope, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.op_scope

def op_scope():

DEPRECATED. Same as name_scope above, just different argument order.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pack(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.pack, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.pack

Return

Applicative

Original documentation for Builder.pack

def pack(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.pack that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.pack

def pack(values, axis=0, name="pack")

Packs a list of rank-R tensors into one rank-(R+1) tensor.

Packs the list of tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of length N of tensors of shape (A, B, C);

if axis == 0 then the output tensor will have the shape (N, A, B, C). if axis == 1 then the output tensor will have the shape (A, N, B, C). Etc.

For example:

```prettyprint
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
```

This is the opposite of unpack. The numpy equivalent is

tf.pack([x, y, z]) = np.asarray([x, y, z])

Args:

  • values: A list of Tensor objects with the same shape and type.
  • axis: An int. The axis to pack along. Defaults to the first dimension. Supports negative indexes.
  • name: A name for this operation (optional).

Returns: output: A packed Tensor with the same type as values.

Raises: ValueError: If axis is out of the range [-(R+1), R+1).

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pack_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.pack_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.pack_layer

Return

Applicative

Original documentation for Builder.pack_layer

def pack_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.pack, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.pack

def pack(values, axis=0, name="pack"):

Packs a list of rank-R tensors into one rank-(R+1) tensor.

Packs the list of tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of length N of tensors of shape (A, B, C);

if axis == 0 then the output tensor will have the shape (N, A, B, C). if axis == 1 then the output tensor will have the shape (A, N, B, C). Etc.

For example:

```prettyprint
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
```

This is the opposite of unpack. The numpy equivalent is

tf.pack([x, y, z]) = np.asarray([x, y, z])

Args:

  • values: A list of Tensor objects with the same shape and type.
  • axis: An int. The axis to pack along. Defaults to the first dimension. Supports negative indexes.
  • name: A name for this operation (optional).

Returns: output: A packed Tensor with the same type as values.

Raises: ValueError: If axis is out of the range [-(R+1), R+1).

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pad(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.pad, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.pad

Return

Applicative

Original documentation for Builder.pad

def pad(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.pad that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.pad

def pad(tensor, paddings, mode="CONSTANT", name=None)

Pads a tensor.

This operation pads a tensor according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of tensor. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of tensor in that dimension, and paddings[D, 1] indicates how many values to add after the contents of tensor in that dimension. If mode is "REFLECT" then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D) - 1. If mode is "SYMMETRIC" then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D).

The padded size of each dimension D of the output is:

paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]

For example:

```python
# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1,], [2, 2]].
# rank of 't' is 2.
pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0],
                                  [0, 0, 1, 2, 3, 0, 0],
                                  [0, 0, 4, 5, 6, 0, 0],
                                  [0, 0, 0, 0, 0, 0, 0]]

pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4],
                                 [3, 2, 1, 2, 3, 2, 1],
                                 [6, 5, 4, 5, 6, 5, 4],
                                 [3, 2, 1, 2, 3, 2, 1]]

pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2],
                                   [2, 1, 1, 2, 3, 3, 2],
                                   [5, 4, 4, 5, 6, 6, 5],
                                   [5, 4, 4, 5, 6, 6, 5]]
```

Args:

  • tensor: A Tensor.
  • paddings: A Tensor of type int32.
  • mode: One of "CONSTANT", "REFLECT", or "SYMMETRIC".
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor.

Raises: ValueError: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC".

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pad_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.pad_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.pad_layer

Return

Applicative

Original documentation for Builder.pad_layer

def pad_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.pad, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.pad

def pad(tensor, paddings, mode="CONSTANT", name=None):

Pads a tensor.

This operation pads a tensor according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of tensor. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of tensor in that dimension, and paddings[D, 1] indicates how many values to add after the contents of tensor in that dimension. If mode is "REFLECT" then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D) - 1. If mode is "SYMMETRIC" then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D).

The padded size of each dimension D of the output is:

paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]

For example:

```python
# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1,], [2, 2]].
# rank of 't' is 2.
pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0],
                                  [0, 0, 1, 2, 3, 0, 0],
                                  [0, 0, 4, 5, 6, 0, 0],
                                  [0, 0, 0, 0, 0, 0, 0]]

pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4],
                                 [3, 2, 1, 2, 3, 2, 1],
                                 [6, 5, 4, 5, 6, 5, 4],
                                 [3, 2, 1, 2, 3, 2, 1]]

pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2],
                                   [2, 1, 1, 2, 3, 3, 2],
                                   [5, 4, 4, 5, 6, 6, 5],
                                   [5, 4, 4, 5, 6, 6, 5]]
```

Args:

  • tensor: A Tensor.
  • paddings: A Tensor of type int32.
  • mode: One of "CONSTANT", "REFLECT", or "SYMMETRIC".
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor.

Raises: ValueError: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC".

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def parse_example(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_example, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_example

Return

Applicative

Original documentation for Builder.parse_example

def parse_example(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.parse_example that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.parse_example

def parse_example(serialized, features, name=None, example_names=None)

Parses Example protos into a dict of tensors.

Parses a number of serialized [Example](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in serialized.

example_names may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not None, example_names must be the same length as serialized.

This op parses serialized examples into a dictionary mapping keys to Tensor and SparseTensor objects. features is a dict from keys to VarLenFeature and FixedLenFeature objects. Each VarLenFeature is mapped to a SparseTensor, and each FixedLenFeature is mapped to a Tensor.

Each VarLenFeature maps to a SparseTensor of the specified type representing a ragged matrix. Its indices are [batch, index] where batch is the batch entry the value is from in serialized, and index is the value's index in the list of values associated with that feature and example.

Each FixedLenFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(),) + df.shape.

FixedLenFeature entries with a default_value are optional. With no default value, we will fail if that Feature is missing from any example in serialized.

Examples:

For example, if one expects a tf.float32 sparse feature ft and three serialized Examples are provided:

    serialized = [
      features
        { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
      features
        { feature []},
      features
        { feature { key: "ft" value { float_list { value: [3.0] } } }
    ]

then the output will look like:

    {"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
                        values=[1.0, 2.0, 3.0],
                        shape=(3, 2)) }

Given two Example input protos in serialized:

    [
      features {
        feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
        feature { key: "gps" value { float_list { value: [] } } }
      },
      features {
        feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
        feature { key: "dank" value { int64_list { value: [ 42 ] } } }
        feature { key: "gps" value { } }
      }
    ]

And arguments

    example_names: ["input0", "input1"],
    features: {
        "kw": VarLenFeature(tf.string),
        "dank": VarLenFeature(tf.int64),
        "gps": VarLenFeature(tf.float32),
    }

Then the output is a dictionary:

    {
      "kw": SparseTensor(
          indices=[[0, 0], [0, 1], [1, 0]],
          values=["knit", "big", "emmy"],
          shape=[2, 2]),
      "dank": SparseTensor(
          indices=[[1, 0]],
          values=[42],
          shape=[2, 1]),
      "gps": SparseTensor(
          indices=[],
          values=[],
          shape=[2, 0]),
    }

For dense results in two serialized Examples:

    [
      features {
        feature { key: "age" value { int64_list { value: [ 0 ] } } }
        feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
      },
      features {
        feature { key: "age" value { int64_list { value: [] } } }
        feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
      }
    ]

We can use arguments:

    example_names: ["input0", "input1"],
    features: {
        "age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
        "gender": FixedLenFeature([], dtype=tf.string),
    }

And the expected output is:

    {
      "age": [[0], [-1]],
      "gender": [["f"], ["f"]],
    }

Args:

  • serialized: A vector (1-D Tensor) of strings, a batch of binary serialized Example protos.
  • features: A dict mapping feature keys to FixedLenFeature or VarLenFeature values.
  • name: A name for this operation (optional).
  • example_names: A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch.

Returns: A dict mapping feature keys to Tensor and SparseTensor values.

Raises: ValueError: if any feature is invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def parse_example_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_example_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_example_layer

Return

Applicative

Original documentation for Builder.parse_example_layer

def parse_example_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.parse_example, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.parse_example

def parse_example(serialized, features, name=None, example_names=None):

Parses Example protos into a dict of tensors.

Parses a number of serialized [Example](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in serialized.

example_names may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not None, example_names must be the same length as serialized.

This op parses serialized examples into a dictionary mapping keys to Tensor and SparseTensor objects. features is a dict from keys to VarLenFeature and FixedLenFeature objects. Each VarLenFeature is mapped to a SparseTensor, and each FixedLenFeature is mapped to a Tensor.

Each VarLenFeature maps to a SparseTensor of the specified type representing a ragged matrix. Its indices are [batch, index] where batch is the batch entry the value is from in serialized, and index is the value's index in the list of values associated with that feature and example.

Each FixedLenFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(),) + df.shape.

FixedLenFeature entries with a default_value are optional. With no default value, we will fail if that Feature is missing from any example in serialized.

Examples:

For example, if one expects a tf.float32 sparse feature ft and three serialized Examples are provided:

    serialized = [
      features
        { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
      features
        { feature []},
      features
        { feature { key: "ft" value { float_list { value: [3.0] } } }
    ]

then the output will look like:

    {"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
                        values=[1.0, 2.0, 3.0],
                        shape=(3, 2)) }

Given two Example input protos in serialized:

    [
      features {
        feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
        feature { key: "gps" value { float_list { value: [] } } }
      },
      features {
        feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
        feature { key: "dank" value { int64_list { value: [ 42 ] } } }
        feature { key: "gps" value { } }
      }
    ]

And arguments

    example_names: ["input0", "input1"],
    features: {
        "kw": VarLenFeature(tf.string),
        "dank": VarLenFeature(tf.int64),
        "gps": VarLenFeature(tf.float32),
    }

Then the output is a dictionary:

    {
      "kw": SparseTensor(
          indices=[[0, 0], [0, 1], [1, 0]],
          values=["knit", "big", "emmy"],
          shape=[2, 2]),
      "dank": SparseTensor(
          indices=[[1, 0]],
          values=[42],
          shape=[2, 1]),
      "gps": SparseTensor(
          indices=[],
          values=[],
          shape=[2, 0]),
    }

For dense results in two serialized Examples:

    [
      features {
        feature { key: "age" value { int64_list { value: [ 0 ] } } }
        feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
      },
      features {
        feature { key: "age" value { int64_list { value: [] } } }
        feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
      }
    ]

We can use arguments:

    example_names: ["input0", "input1"],
    features: {
        "age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
        "gender": FixedLenFeature([], dtype=tf.string),
    }

And the expected output is:

    {
      "age": [[0], [-1]],
      "gender": [["f"], ["f"]],
    }

Args:

  • serialized: A vector (1-D Tensor) of strings, a batch of binary serialized Example protos.
  • features: A dict mapping feature keys to FixedLenFeature or VarLenFeature values.
  • name: A name for this operation (optional).
  • example_names: A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch.

Returns: A dict mapping feature keys to Tensor and SparseTensor values.

Raises: ValueError: if any feature is invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def parse_single_example(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_single_example, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_single_example

Return

Applicative

Original documentation for Builder.parse_single_example

def parse_single_example(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.parse_single_example that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.parse_single_example

def parse_single_example(serialized, features, name=None, example_names=None)

Parses a single Example proto.

Similar to parse_example, except:

For dense tensors, the returned Tensor is identical to the output of parse_example, except that there is no batch dimension: the output shape is the same as the shape given in dense_shape.

For SparseTensors, the first (batch) column of the indices matrix is removed (the indices matrix is a column vector), the values vector is unchanged, and the first (batch_size) entry of the shape vector is removed (it is now a single element vector).

Args:

  • serialized: A scalar string Tensor, a single serialized Example. See _parse_single_example_raw documentation for more details.
  • features: A dict mapping feature keys to FixedLenFeature or VarLenFeature values.
  • name: A name for this operation (optional).
  • example_names: (Optional) A scalar string Tensor, the associated name. See _parse_single_example_raw documentation for more details.

Returns: A dict mapping feature keys to Tensor and SparseTensor values.

Raises: ValueError: if any feature is invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
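
A minimal sketch of parsing one serialized Example with the documented feature types; the feature keys ("age", "kw") are assumptions made up for the example:

    import tensorflow as tf

    serialized = tf.placeholder(tf.string, shape=[])  # one serialized Example

    features = tf.parse_single_example(serialized, features={
        # fixed-length scalar with a default, so it may be absent
        "age": tf.FixedLenFeature([], dtype=tf.int64, default_value=-1),
        # variable-length feature comes back as a SparseTensor
        "kw": tf.VarLenFeature(tf.string),
    })
    age = features["age"]  # scalar Tensor, no batch dimension
    kw = features["kw"]    # SparseTensor with 1-D indices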

def parse_single_example_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_single_example_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_single_example_layer

Return

Applicative

Original documentation for Builder.parse_single_example_layer

def parse_single_example_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.parse_single_example, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.parse_single_example

def parse_single_example(serialized, features, name=None, example_names=None):

Parses a single Example proto.

Similar to parse_example, except:

For dense tensors, the returned Tensor is identical to the output of parse_example, except that there is no batch dimension: the output shape is the same as the shape given in dense_shape.

For SparseTensors, the first (batch) column of the indices matrix is removed (the indices matrix is a column vector), the values vector is unchanged, and the first (batch_size) entry of the shape vector is removed (it is now a single element vector).

Args:

  • serialized: A scalar string Tensor, a single serialized Example. See _parse_single_example_raw documentation for more details.
  • features: A dict mapping feature keys to FixedLenFeature or VarLenFeature values.
  • name: A name for this operation (optional).
  • example_names: (Optional) A scalar string Tensor, the associated name. See _parse_single_example_raw documentation for more details.

Returns: A dict mapping feature keys to Tensor and SparseTensor values.

Raises: ValueError: if any feature is invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def parse_single_sequence_example(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_single_sequence_example, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_single_sequence_example

Return

Applicative

Original documentation for Builder.parse_single_sequence_example

def parse_single_sequence_example(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.parse_single_sequence_example that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.parse_single_sequence_example

def parse_single_sequence_example(serialized, context_features=None, sequence_features=None, example_name=None, name=None)

Parses a single SequenceExample proto.

Parses a single serialized [SequenceExample](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) proto given in serialized.

This op parses a serialized sequence example into a tuple of dictionaries mapping keys to Tensor and SparseTensor objects respectively. The first dictionary contains mappings for keys appearing in context_features, and the second dictionary contains mappings for keys appearing in sequence_features.

At least one of context_features and sequence_features must be provided and non-empty.

The context_features keys are associated with a SequenceExample as a whole, independent of time / frame. In contrast, the sequence_features keys provide a way to access variable-length data within the FeatureList section of the SequenceExample proto. While the shapes of context_features values are fixed with respect to frame, the frame dimension (the first dimension) of sequence_features values may vary between SequenceExample protos, and even between feature_list keys within the same SequenceExample.

context_features contains VarLenFeature and FixedLenFeature objects. Each VarLenFeature is mapped to a SparseTensor, and each FixedLenFeature is mapped to a Tensor, of the specified type, shape, and default value.

sequence_features contains VarLenFeature and FixedLenSequenceFeature objects. Each VarLenFeature is mapped to a SparseTensor, and each FixedLenSequenceFeature is mapped to a Tensor, each of the specified type. The shape will be (T,) + df.shape for FixedLenSequenceFeature df, where T is the length of the associated FeatureList in the SequenceExample. For instance, FixedLenSequenceFeature([]) yields a scalar 1-D Tensor of static shape [None] and dynamic shape [T], while FixedLenSequenceFeature([k]) (for int k >= 1) yields a 2-D matrix Tensor of static shape [None, k] and dynamic shape [T, k].

Each SparseTensor corresponding to sequence_features represents a ragged vector. Its indices are [time, index], where time is the FeatureList entry and index is the value's index in the list of values associated with that time.

FixedLenFeature entries with a default_value and FixedLenSequenceFeature entries with allow_missing=True are optional; otherwise, we will fail if that Feature or FeatureList is missing from any example in serialized.

example_name may contain a descriptive name for the corresponding serialized proto. This may be useful for debugging purposes, but it has no effect on the output. If not None, example_name must be a scalar.

Args:

  • serialized: A scalar (0-D Tensor) of type string, a single binary serialized SequenceExample proto.
  • context_features: A dict mapping feature keys to FixedLenFeature or VarLenFeature values. These features are associated with a SequenceExample as a whole.
  • sequence_features: A dict mapping feature keys to FixedLenSequenceFeature or VarLenFeature values. These features are associated with data within the FeatureList section of the SequenceExample proto.
  • example_name: A scalar (0-D Tensor) of strings (optional), the name of the serialized proto.
  • name: A name for this operation (optional).

Returns: A tuple of two dicts, each mapping keys to Tensors and SparseTensors. The first dict contains the context key/values. The second dict contains the feature_list key/values.

Raises: ValueError: if any feature is invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
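
A minimal sketch of the context/sequence split described above; the feature keys ("length", "tokens") are assumptions made up for the example:

    import tensorflow as tf

    serialized = tf.placeholder(tf.string, shape=[])  # one SequenceExample

    context, sequences = tf.parse_single_sequence_example(
        serialized,
        context_features={
            "length": tf.FixedLenFeature([], dtype=tf.int64),
        },
        sequence_features={
            # FixedLenSequenceFeature([]) yields one scalar per FeatureList
            # entry: static shape [None], dynamic shape [T]
            "tokens": tf.FixedLenSequenceFeature([], dtype=tf.int64),
        })
    tokens = sequences["tokens"]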

def parse_single_sequence_example_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_single_sequence_example_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_single_sequence_example_layer

Return

Applicative

Original documentation for Builder.parse_single_sequence_example_layer

def parse_single_sequence_example_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.parse_single_sequence_example, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.parse_single_sequence_example

def parse_single_sequence_example(serialized, context_features=None, sequence_features=None, example_name=None, name=None):

Parses a single SequenceExample proto.

Parses a single serialized [SequenceExample](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) proto given in serialized.

This op parses a serialized sequence example into a tuple of dictionaries mapping keys to Tensor and SparseTensor objects respectively. The first dictionary contains mappings for keys appearing in context_features, and the second dictionary contains mappings for keys appearing in sequence_features.

At least one of context_features and sequence_features must be provided and non-empty.

The context_features keys are associated with a SequenceExample as a whole, independent of time / frame. In contrast, the sequence_features keys provide a way to access variable-length data within the FeatureList section of the SequenceExample proto. While the shapes of context_features values are fixed with respect to frame, the frame dimension (the first dimension) of sequence_features values may vary between SequenceExample protos, and even between feature_list keys within the same SequenceExample.

context_features contains VarLenFeature and FixedLenFeature objects. Each VarLenFeature is mapped to a SparseTensor, and each FixedLenFeature is mapped to a Tensor, of the specified type, shape, and default value.

sequence_features contains VarLenFeature and FixedLenSequenceFeature objects. Each VarLenFeature is mapped to a SparseTensor, and each FixedLenSequenceFeature is mapped to a Tensor, each of the specified type. The shape will be (T,) + df.shape for FixedLenSequenceFeature df, where T is the length of the associated FeatureList in the SequenceExample. For instance, FixedLenSequenceFeature([]) yields a scalar 1-D Tensor of static shape [None] and dynamic shape [T], while FixedLenSequenceFeature([k]) (for int k >= 1) yields a 2-D matrix Tensor of static shape [None, k] and dynamic shape [T, k].

Each SparseTensor corresponding to sequence_features represents a ragged vector. Its indices are [time, index], where time is the FeatureList entry and index is the value's index in the list of values associated with that time.

FixedLenFeature entries with a default_value and FixedLenSequenceFeature entries with allow_missing=True are optional; otherwise, we will fail if that Feature or FeatureList is missing from any example in serialized.

example_name may contain a descriptive name for the corresponding serialized proto. This may be useful for debugging purposes, but it has no effect on the output. If not None, example_name must be a scalar.

Args:

  • serialized: A scalar (0-D Tensor) of type string, a single binary serialized SequenceExample proto.
  • context_features: A dict mapping feature keys to FixedLenFeature or VarLenFeature values. These features are associated with a SequenceExample as a whole.
  • sequence_features: A dict mapping feature keys to FixedLenSequenceFeature or VarLenFeature values. These features are associated with data within the FeatureList section of the SequenceExample proto.
  • example_name: A scalar (0-D Tensor) of strings (optional), the name of the serialized proto.
  • name: A name for this operation (optional).

Returns: A tuple of two dicts, each mapping keys to Tensors and SparseTensors. The first dict contains the context key/values. The second dict contains the feature_list key/values.

Raises: ValueError: if any feature is invalid.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def parse_tensor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_tensor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_tensor

Return

Applicative

Original documentation for Builder.parse_tensor

def parse_tensor(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.parse_tensor that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.parse_tensor

def parse_tensor(serialized, out_type, name=None)

Transforms a serialized tensorflow.TensorProto proto into a Tensor.

Args:

  • serialized: A Tensor of type string. A scalar string containing a serialized TensorProto proto.
  • out_type: A tf.DType. The type of the serialized tensor. The provided type must match the type of the serialized tensor and no implicit conversion will take place.
  • name: A name for the operation (optional).

Returns: A Tensor of type out_type.
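A small round-trip sketch may help; it assumes tf.contrib.util.make_tensor_proto is available to build the serialized bytes (any other source of a serialized TensorProto would do):

import tensorflow as tf

# Serialize a float32 tensor into a TensorProto string, then parse it back.
proto_bytes = tf.contrib.util.make_tensor_proto(
    [1.0, 2.0, 3.0]).SerializeToString()
serialized = tf.constant(proto_bytes)
t = tf.parse_tensor(serialized, out_type=tf.float32)  # out_type must match

with tf.Session() as sess:
    print(sess.run(t))  # [1. 2. 3.]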

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def parse_tensor_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.parse_tensor_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.parse_tensor_layer

Return

Applicative

Original documentation for Builder.parse_tensor_layer

def parse_tensor_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.parse_tensor, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.parse_tensor

def parse_tensor(serialized, out_type, name=None):

Transforms a serialized tensorflow.TensorProto proto into a Tensor.

Args:

  • serialized: A Tensor of type string. A scalar string containing a serialized TensorProto proto.
  • out_type: A tf.DType. The type of the serialized tensor. The provided type must match the type of the serialized tensor and no implicit conversion will take place.
  • name: A name for the operation (optional).

Returns: A Tensor of type out_type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pipe(

self, builder, *ast)

pipe takes in a builder of type Builder, BuilderTree or (preferably) Tensor and an object ast which must be part of the domain of the DSL; it compiles ast to a function of type Builder -> Builder and applies it to the input builder. All *args after builder are collected into a tuple, so a sequence of arguments already forms the initial tuple () element that denotes a sequential operation.

Arguments

  • builder: a Builder, BuilderTree or Tensor preferably.
  • *ast: a sequence of elements of the DSL.

Return

An object with the result of the computation; probable types: Tensor | Builder | BuilderTree | list(Tensor)

Examples

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = tb.pipe(
    x,
    [
        { tf.device("/gpu:0"):
            tb.relu_layer(20)
        }
    ,
        { tf.device("/gpu:1"):
            tb.sigmoid_layer(20)
        }
    ,
        { tf.device("/cpu:0"):
            tb.tanh_layer(20)
        }
    ],
    tb.relu_layer(10)
    .tensor()
)
def pipe(self, builder, *ast):
    """
    `pipe` takes in a `builder` of type `Builder`, `BuilderTree` or (preferably) `Tensor` and an object `ast` which must be part of the domain of the DSL; it compiles `ast` to a function of type `Builder -> Builder` and applies it to the input `builder`. All \*args after `builder` are collected into a tuple, so a sequence of arguments already forms the initial tuple `()` element that denotes a sequential operation.
    **Arguments**
    * `builder`: a `Builder`, `BuilderTree` or `Tensor` preferably.
    * `*ast`: a sequence of elements of the DSL.
    **Return**
    An object with the result of the computation; probable types: `Tensor | Builder | BuilderTree | list(Tensor)`
    **Examples**
        import tensorflow as tf
        from tensorbuilder import tb
        x = tf.placeholder(tf.float32, shape=[None, 10])
        h = tb.pipe(
            x,
            [
                { tf.device("/gpu:0"):
                    tb.relu_layer(20)
                }
            ,
                { tf.device("/gpu:1"):
                    tb.sigmoid_layer(20)
                }
            ,
                { tf.device("/cpu:0"):
                    tb.tanh_layer(20)
                }
            ],
            tb.relu_layer(10)
            .tensor()
        )
    """
    f = _compile(ast)
    # if the input is a Tensor or Variable, wrap it in a Builder
    if type(builder) is tf.Tensor or type(builder) is tf.Variable:
        builder = self.Builder(builder)
    return f(builder)

def placeholder(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.placeholder, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.placeholder

Return

Applicative

Original documentation for Builder.placeholder

def placeholder(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.placeholder that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.placeholder

def placeholder(dtype, shape=None, name=None)

Inserts a placeholder for a tensor that will be always fed.

Important: This tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run().

For example:

x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(y))  # ERROR: will fail because x was not fed.

    rand_array = np.random.rand(1024, 1024)
    print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.

Args:

  • dtype: The type of elements in the tensor to be fed.
  • shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.
  • name: A name for the operation (optional).

Returns: A Tensor that may be used as a handle for feeding a value, but not evaluated directly.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def placeholder_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.placeholder_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.placeholder_layer

Return

Applicative

Original documentation for Builder.placeholder_layer

def placeholder_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.placeholder, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.placeholder

def placeholder(dtype, shape=None, name=None):

Inserts a placeholder for a tensor that will be always fed.

Important: This tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run().

For example:

x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(y))  # ERROR: will fail because x was not fed.

    rand_array = np.random.rand(1024, 1024)
    print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.

Args:

  • dtype: The type of elements in the tensor to be fed.
  • shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.
  • name: A name for the operation (optional).

Returns: A Tensor that may be used as a handle for feeding a value, but not evaluated directly.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def placeholder_with_default(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.placeholder_with_default, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.placeholder_with_default

Return

Applicative

Original documentation for Builder.placeholder_with_default

def placeholder_with_default(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.placeholder_with_default that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.placeholder_with_default

def placeholder_with_default(input, shape, name=None)

A placeholder op that passes through input when its output is not fed.

Args:

  • input: A Tensor. The default value to produce when output is not fed.
  • shape: A tf.TensorShape or list of ints. The (possibly partial) shape of the tensor.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. A placeholder tensor that defaults to input if it is not fed.
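A minimal usage sketch (the values here are only illustrative):

import tensorflow as tf

default = tf.constant([[1.0, 2.0]])
x = tf.placeholder_with_default(default, shape=[None, 2])
y = x * 10.0

with tf.Session() as sess:
    print(sess.run(y))                               # default: [[10. 20.]]
    print(sess.run(y, feed_dict={x: [[3.0, 4.0]]}))  # fed:     [[30. 40.]]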

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def placeholder_with_default_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.placeholder_with_default_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.placeholder_with_default_layer

Return

Applicative

Original documentation for Builder.placeholder_with_default_layer

def placeholder_with_default_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.placeholder_with_default, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.placeholder_with_default

def placeholder_with_default(input, shape, name=None):

A placeholder op that passes through input when its output is not fed.

Args:

  • input: A Tensor. The default value to produce when output is not fed.
  • shape: A tf.TensorShape or list of ints. The (possibly partial) shape of the tensor.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. A placeholder tensor that defaults to input if it is not fed.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def polygamma(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.polygamma, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.polygamma

Return

Applicative

Original documentation for Builder.polygamma

def polygamma(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.polygamma that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.polygamma

def polygamma(a, x, name=None)

Compute the polygamma function \(\psi^{(n)}(x)\).

The polygamma function is defined as:

\(\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)\)

where \(\psi(x)\) is the digamma function.

Args:

  • a: A Tensor. Must be one of the following types: float32, float64.
  • x: A Tensor. Must have the same type as a.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.
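For instance, with a = 1 the op computes the trigamma function \(\psi^{(1)}(x)\); a short sketch (printed values are approximate):

import tensorflow as tf

a = tf.constant(1.0)        # order n = 1
x = tf.constant([1.0, 2.0])
y = tf.polygamma(a, x)      # psi^(1)(x), the trigamma function

with tf.Session() as sess:
    print(sess.run(y))      # approx. [1.6449, 0.6449]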

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def polygamma_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.polygamma_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.polygamma_layer

Return

Applicative

Original documentation for Builder.polygamma_layer

def polygamma_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.polygamma, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.polygamma

def polygamma(a, x, name=None):

Compute the polygamma function \(\psi^{(n)}(x)\).

The polygamma function is defined as:

\(\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)\)

where \(\psi(x)\) is the digamma function.

Args:

  • a: A Tensor. Must be one of the following types: float32, float64.
  • x: A Tensor. Must have the same type as a.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as a.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def polynomial_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.polynomial_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.polynomial_layer

Return

Applicative

Original documentation for Builder.polynomial_layer

def polynomial_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is the same as tensorbuilder.Builder.polynomial_layer.

Original Documentation for tensorbuilder.Builder.polynomial_layer

def polynomial_layer(builder, size)

Creates a fully connected layer of size size and then applies the activation function y(i) = z(i)^(i+1), where z = w*x + b and i indexes the units of the layer.
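As a rough sketch, assuming i indexes units from 0 so that unit i is raised to the power i + 1, the activation alone might look like this (an illustration of the formula above, not the library's actual implementation):

import tensorflow as tf

def polynomial_activation(z):
    # z has static shape [batch, size]; unit i becomes z(i)^(i+1).
    size = z.get_shape()[1].value
    exponents = tf.cast(tf.range(1, size + 1), z.dtype)  # [1, 2, ..., size]
    return tf.pow(z, exponents)  # broadcasts over the batch dimension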

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pow(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.pow, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.pow

Return

Applicative

Original documentation for Builder.pow

def pow(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.pow that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.pow

def pow(x, y, name=None)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]

Args:

  • x: A Tensor of type float32, float64, int32, int64, complex64, or complex128.
  • y: A Tensor of type float32, float64, int32, int64, complex64, or complex128.
  • name: A name for the operation (optional).

Returns: A Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def pow_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.pow_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.pow_layer

Return

Applicative

Original documentation for Builder.pow_layer

def pow_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.pow, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.pow

def pow(x, y, name=None):

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]

Args:

  • x: A Tensor of type float32, float64, int32, int64, complex64, or complex128.
  • y: A Tensor of type float32, float64, int32, int64, complex64, or complex128.
  • name: A name for the operation (optional).

Returns: A Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def py_func(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.py_func, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.py_func

Return

Applicative

Original documentation for Builder.py_func

def py_func(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.py_func that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.py_func

def py_func(func, inp, Tout, stateful=True, name=None)

Wraps a python function and uses it as a tensorflow op.

Given a python function func that takes numpy arrays as its inputs and returns numpy arrays as its outputs, e.g.:

def my_func(x):
    # x will be a numpy array with the contents of the placeholder below
    return np.sinh(x)

inp = tf.placeholder(tf.float32, [...])
y = py_func(my_func, [inp], [tf.float32])

The above snippet constructs a tf graph which invokes a numpy sinh(x) as an op in the graph.

Args:

  • func: A python function.
  • inp: A list of Tensor.
  • Tout: A list or tuple of tensorflow data types or a single tensorflow data type if there is only one, indicating what func returns.
  • stateful: A boolean indicating whether the function should be considered stateful or stateless. I.e. whether it, given the same input, will return the same output and at the same time does not change state in an observable way. Optimizations such as common subexpression elimination are only possible when operations are stateless.
  • name: A name for the operation (optional).

Returns: A list of Tensor or a single Tensor which func computes.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def py_func_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.py_func_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.py_func_layer

Return

Applicative

Original documentation for Builder.py_func_layer

def py_func_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.py_func, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.py_func

def py_func(func, inp, Tout, stateful=True, name=None):

Wraps a python function and uses it as a tensorflow op.

Given a python function func that takes numpy arrays as its inputs and returns numpy arrays as its outputs, e.g.:

def my_func(x):
    # x will be a numpy array with the contents of the placeholder below
    return np.sinh(x)

inp = tf.placeholder(tf.float32, [...])
y = py_func(my_func, [inp], [tf.float32])

The above snippet constructs a tf graph which invokes a numpy sinh(x) as an op in the graph.

Args:

  • func: A python function.
  • inp: A list of Tensor.
  • Tout: A list or tuple of tensorflow data types or a single tensorflow data type if there is only one, indicating what func returns.
  • stateful: A boolean indicating whether the function should be considered stateful or stateless. I.e. whether it, given the same input, will return the same output and at the same time does not change state in an observable way. Optimizations such as common subexpression elimination are only possible when operations are stateless.
  • name: A name for the operation (optional).

Returns: A list of Tensor or a single Tensor which func computes.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_crop(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_crop, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_crop

Return

Applicative

Original documentation for Builder.random_crop

def random_crop(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_crop that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_crop

def random_crop(value, size, seed=None, name=None)

Randomly crops a tensor to a given size.

Slices a shape size portion out of value at a uniformly chosen offset. Requires value.shape >= size.

If a dimension should not be cropped, pass the full size of that dimension. For example, RGB images can be cropped with size = [crop_height, crop_width, 3].

Args:

  • value: Input tensor to crop.
  • size: 1-D tensor with size the rank of value.
  • seed: Python integer. Used to create a random seed. See set_random_seed for behavior.
  • name: A name for this operation (optional).

Returns: A cropped tensor of the same rank as value and shape size.
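For example, a random 224x224 patch of an RGB image can be taken like this (the shapes are only illustrative):

import tensorflow as tf

image = tf.placeholder(tf.uint8, shape=[480, 640, 3])
# Crop height and width at a random offset; keep all 3 channels.
patch = tf.random_crop(image, size=[224, 224, 3], seed=42)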

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_crop_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_crop_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_crop_layer

Return

Applicative

Original documentation for Builder.random_crop_layer

def random_crop_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_crop, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_crop

def random_crop(value, size, seed=None, name=None):

Randomly crops a tensor to a given size.

Slices a shape size portion out of value at a uniformly chosen offset. Requires value.shape >= size.

If a dimension should not be cropped, pass the full size of that dimension. For example, RGB images can be cropped with size = [crop_height, crop_width, 3].

Args:

  • value: Input tensor to crop.
  • size: 1-D tensor with size the rank of value.
  • seed: Python integer. Used to create a random seed. See set_random_seed for behavior.
  • name: A name for this operation (optional).

Returns: A cropped tensor of the same rank as value and shape size.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_gamma(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_gamma, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_gamma

Return

Applicative

Original documentation for Builder.random_gamma

def random_gamma(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_gamma that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_gamma

def random_gamma(shape, alpha, beta=None, dtype=<dtype: 'float32'>, seed=None, name=None)

Draws shape samples from each of the given Gamma distribution(s).

alpha is the shape parameter describing the distribution(s), and beta is the inverse scale parameter(s).

Example:

samples = tf.random_gamma([10], [0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution

samples = tf.random_gamma([7, 5], [0.5, 1.5])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions

samples = tf.random_gamma([30], [[1.],[3.],[5.]], beta=[[3., 4.]])
# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.

Note that for small alpha values, there is a chance you will draw a value of exactly 0, which gets worse for lower-precision dtypes, even though zero is not in the support of the gamma distribution.

Relevant cdfs (~chance you will draw an exactly-0 value):

stats.gamma(.01).cdf(np.finfo(np.float16).tiny)  # 0.91269738769897879
stats.gamma(.01).cdf(np.finfo(np.float32).tiny)  # 0.41992668622045726
stats.gamma(.01).cdf(np.finfo(np.float64).tiny)  # 0.00084322740680686662
stats.gamma(.35).cdf(np.finfo(np.float16).tiny)  # 0.037583276135263931
stats.gamma(.35).cdf(np.finfo(np.float32).tiny)  # 5.9514895726818067e-14
stats.gamma(.35).cdf(np.finfo(np.float64).tiny)  # 2.3529843400647272e-108

Args:

  • shape: A 1-D integer Tensor or Python array. The shape of the output samples to be drawn per alpha/beta-parameterized distribution.
  • alpha: A Tensor or Python value or N-D array of type dtype. alpha provides the shape parameter(s) describing the gamma distribution(s) to sample. Must be broadcastable with beta.
  • beta: A Tensor or Python value or N-D array of type dtype. Defaults to 1. beta provides the inverse scale parameter(s) of the gamma distribution(s) to sample. Must be broadcastable with alpha.
  • dtype: The type of alpha, beta, and the output: float16, float32, or float64.
  • seed: A Python integer. Used to create a random seed for the distributions. See set_random_seed for behavior.
  • name: Optional name for the operation.

Returns: samples: a Tensor of shape tf.concat(shape, tf.shape(alpha + beta)) with values of type dtype.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_gamma_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_gamma_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_gamma_layer

Return

Applicative

Original documentation for Builder.random_gamma_layer

def random_gamma_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_gamma, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_gamma

def random_gamma(shape, alpha, beta=None, dtype=<dtype: 'float32'>, seed=None, name=None):

Draws shape samples from each of the given Gamma distribution(s).

alpha is the shape parameter describing the distribution(s), and beta is the inverse scale parameter(s).

Example:

samples = tf.random_gamma([10], [0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution

samples = tf.random_gamma([7, 5], [0.5, 1.5])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions

samples = tf.random_gamma([30], [[1.],[3.],[5.]], beta=[[3., 4.]])
# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.

Note that for small alpha values, there is a chance you will draw a value of exactly 0, which gets worse for lower-precision dtypes, even though zero is not in the support of the gamma distribution.

Relevant cdfs (~chance you will draw an exactly-0 value):

stats.gamma(.01).cdf(np.finfo(np.float16).tiny)  # 0.91269738769897879
stats.gamma(.01).cdf(np.finfo(np.float32).tiny)  # 0.41992668622045726
stats.gamma(.01).cdf(np.finfo(np.float64).tiny)  # 0.00084322740680686662
stats.gamma(.35).cdf(np.finfo(np.float16).tiny)  # 0.037583276135263931
stats.gamma(.35).cdf(np.finfo(np.float32).tiny)  # 5.9514895726818067e-14
stats.gamma(.35).cdf(np.finfo(np.float64).tiny)  # 2.3529843400647272e-108

Args:

  • shape: A 1-D integer Tensor or Python array. The shape of the output samples to be drawn per alpha/beta-parameterized distribution.
  • alpha: A Tensor or Python value or N-D array of type dtype. alpha provides the shape parameter(s) describing the gamma distribution(s) to sample. Must be broadcastable with beta.
  • beta: A Tensor or Python value or N-D array of type dtype. Defaults to 1. beta provides the inverse scale parameter(s) of the gamma distribution(s) to sample. Must be broadcastable with alpha.
  • dtype: The type of alpha, beta, and the output: float16, float32, or float64.
  • seed: A Python integer. Used to create a random seed for the distributions. See set_random_seed for behavior.
  • name: Optional name for the operation.

Returns: samples: a Tensor of shape tf.concat(shape, tf.shape(alpha + beta)) with values of type dtype.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_normal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_normal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_normal

Return

Applicative

Original documentation for Builder.random_normal

def random_normal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_normal that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_normal

def random_normal(shape, mean=0.0, stddev=1.0, dtype=<dtype: 'float32'>, seed=None, name=None)

Outputs random values from a normal distribution.

Args:

  • shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
  • mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
  • stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
  • dtype: The type of the output.
  • seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
  • name: A name for the operation (optional).

Returns: A tensor of the specified shape filled with random normal values.
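A short sketch; a fresh sample is drawn on every Session.run call:

import tensorflow as tf

noise = tf.random_normal([2, 3], mean=0.0, stddev=0.1, seed=1)

with tf.Session() as sess:
    print(sess.run(noise))  # a 2x3 sample
    print(sess.run(noise))  # a different 2x3 sample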

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_normal_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_normal_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_normal_initializer

Return

Applicative

Original documentation for Builder.random_normal_initializer

def random_normal_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_normal_initializer that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_normal_initializer

def random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=<dtype: 'float32'>)

Returns an initializer that generates tensors with a normal distribution.

Args:

  • mean: a python scalar or a scalar tensor. Mean of the random values to generate.
  • stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • dtype: The data type. Only floating point types are supported.

Returns: An initializer that generates tensors with a normal distribution.

Raises: ValueError: if dtype is not a floating point type.
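Typical usage is as the initializer argument of a variable; a minimal sketch (the variable name and shape are only illustrative):

import tensorflow as tf

init = tf.random_normal_initializer(mean=0.0, stddev=0.05)
w = tf.get_variable("w", shape=[784, 10], initializer=init)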

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_normal_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_normal_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_normal_initializer_layer

Return

Applicative

Original documentation for Builder.random_normal_initializer_layer

def random_normal_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_normal_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_normal_initializer

def random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=<dtype: 'float32'>):

Returns an initializer that generates tensors with a normal distribution.

Args:

  • mean: a python scalar or a scalar tensor. Mean of the random values to generate.
  • stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • dtype: The data type. Only floating point types are supported.

Returns: An initializer that generates tensors with a normal distribution.

Raises: ValueError: if dtype is not a floating point type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_normal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_normal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_normal_layer

Return

Applicative

Original documentation for Builder.random_normal_layer

def random_normal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_normal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_normal

def random_normal(shape, mean=0.0, stddev=1.0, dtype=<dtype: 'float32'>, seed=None, name=None):

Outputs random values from a normal distribution.

Args:

  • shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
  • mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
  • stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
  • dtype: The type of the output.
  • seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
  • name: A name for the operation (optional).

Returns: A tensor of the specified shape filled with random normal values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_shuffle(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_shuffle, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_shuffle

Return

Applicative

Original documentation for Builder.random_shuffle

def random_shuffle(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_shuffle that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_shuffle

def random_shuffle(value, seed=None, name=None)

Randomly shuffles a tensor along its first dimension.

The tensor is shuffled along dimension 0, such that each value[j] is mapped to one and only one output[i]. For example, a mapping that might occur for a 3x2 tensor is:

[[1, 2],       [[5, 6],
 [3, 4],  ==>   [1, 2],
 [5, 6]]        [3, 4]]

Args:

  • value: A Tensor to be shuffled.
  • seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
  • name: A name for the operation (optional).

Returns: A tensor of same shape and type as value, shuffled along its first dimension.
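For instance, shuffling the rows of a matrix (only dimension 0 is permuted; the values within each row stay together):

import tensorflow as tf

t = tf.constant([[1, 2], [3, 4], [5, 6]])
shuffled = tf.random_shuffle(t, seed=7)  # e.g. [[5, 6], [1, 2], [3, 4]]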

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_shuffle_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_shuffle_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_shuffle_layer

Return

Applicative

Original documentation for Builder.random_shuffle_layer

def random_shuffle_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_shuffle, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_shuffle

def random_shuffle(value, seed=None, name=None):

Randomly shuffles a tensor along its first dimension.

The tensor is shuffled along dimension 0, such that each value[j] is mapped to one and only one output[i]. For example, a mapping that might occur for a 3x2 tensor is:

[[1, 2],       [[5, 6],
 [3, 4],  ==>   [1, 2],
 [5, 6]]        [3, 4]]

Args:

  • value: A Tensor to be shuffled.
  • seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
  • name: A name for the operation (optional).

Returns: A tensor of same shape and type as value, shuffled along its first dimension.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_uniform(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_uniform, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_uniform

Return

Applicative

Original documentation for Builder.random_uniform

def random_uniform(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_uniform that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_uniform

def random_uniform(shape, minval=0, maxval=None, dtype=<dtype: 'float32'>, seed=None, name=None)

Outputs random values from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.

For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly.

In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64).

Args:

  • shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
  • minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
  • maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
  • dtype: The type of the output: float32, float64, int32, or int64.
  • seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
  • name: A name for the operation (optional).

Returns: A tensor of the specified shape filled with random uniform values.

Raises: ValueError: If dtype is integral and maxval is not specified.
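A short sketch covering both the float and the integer case (for integers, maxval is required):

import tensorflow as tf

u = tf.random_uniform([3])                                      # floats in [0, 1)
d = tf.random_uniform([3], minval=1, maxval=7, dtype=tf.int32)  # ints in [1, 7)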

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_uniform_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_uniform_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_uniform_initializer

Return

Applicative

Original documentation for Builder.random_uniform_initializer

def random_uniform_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.random_uniform_initializer that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.random_uniform_initializer

def random_uniform_initializer(minval=0, maxval=None, seed=None, dtype=<dtype: 'float32'>)

Returns an initializer that generates tensors with a uniform distribution.

Args:

  • minval: A python scalar or a scalar tensor. Lower bound of the range of random values to generate.
  • maxval: A python scalar or a scalar tensor. Upper bound of the range of random values to generate. Defaults to 1 for float types.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • dtype: The data type.

Returns: An initializer that generates tensors with a uniform distribution.
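As with random_normal_initializer, this is typically passed as the initializer of a variable; a minimal sketch (the variable name and shape are only illustrative):

import tensorflow as tf

init = tf.random_uniform_initializer(minval=-0.1, maxval=0.1)
b = tf.get_variable("b", shape=[10], initializer=init)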

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_uniform_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_uniform_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_uniform_initializer_layer

Return

Applicative

Original documentation for Builder.random_uniform_initializer_layer

def random_uniform_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_uniform_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_uniform_initializer

def random_uniform_initializer(minval=0, maxval=None, seed=None, dtype=<dtype: 'float32'>):

Returns an initializer that generates tensors with a uniform distribution.

Args:

  • minval: A python scalar or a scalar tensor. Lower bound of the range of random values to generate.
  • maxval: A python scalar or a scalar tensor. Upper bound of the range of random values to generate. Defaults to 1 for float types.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • dtype: The data type.

Returns: An initializer that generates tensors with a uniform distribution.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def random_uniform_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.random_uniform_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.random_uniform_layer

Return

Applicative

Original documentation for Builder.random_uniform_layer

def random_uniform_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.random_uniform, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.random_uniform

def random_uniform(shape, minval=0, maxval=None, dtype=<dtype: 'float32'>, seed=None, name=None):

Outputs random values from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.

For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly.

In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64).

Args:

  • shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
  • minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
  • maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
  • dtype: The type of the output: float32, float64, int32, or int64.
  • seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
  • name: A name for the operation (optional).

Returns: A tensor of the specified shape filled with random uniform values.

Raises: ValueError: If dtype is integral and maxval is not specified.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def range(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.range, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.range

Return

Applicative

Original documentation for Builder.range

def range(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.range that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.range

def range(start, limit=None, delta=1, name="range")

Creates a sequence of integers.

Creates a sequence of integers that begins at start and extends by increments of delta up to but not including limit.

Like the Python builtin range, start defaults to 0, so that range(n) = range(0, n).

For example:

# 'start' is 3
# 'limit' is 18
# 'delta' is 3
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]

# 'limit' is 5
tf.range(limit) ==> [0, 1, 2, 3, 4]

Args:

  • start: A 0-D (scalar) of type int32. Acts as first entry in the range if limit is not None; otherwise, acts as range limit and first entry defaults to 0.
  • limit: A 0-D (scalar) of type int32. Upper limit of sequence, exclusive. If None, defaults to the value of start while the first entry of the range defaults to 0.
  • delta: A 0-D Tensor (scalar) of type int32. Number that increments start. Defaults to 1.
  • name: A name for the operation. Defaults to "range".

Returns: A 1-D int32 Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def range_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.range_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.range_layer

Return

Applicative

Original documentation for Builder.range_layer

def range_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.range, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.range

def range(start, limit=None, delta=1, name="range"):

Creates a sequence of integers.

Creates a sequence of integers that begins at start and extends by increments of delta up to but not including limit.

Like the Python builtin range, start defaults to 0, so that range(n) = range(0, n).

For example:

# 'start' is 3
# 'limit' is 18
# 'delta' is 3
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]

# 'limit' is 5
tf.range(limit) ==> [0, 1, 2, 3, 4]

Args:

  • start: A 0-D (scalar) of type int32. Acts as first entry in the range if limit is not None; otherwise, acts as range limit and first entry defaults to 0.
  • limit: A 0-D (scalar) of type int32. Upper limit of sequence, exclusive. If None, defaults to the value of start while the first entry of the range defaults to 0.
  • delta: A 0-D Tensor (scalar) of type int32. Number that increments start. Defaults to 1.
  • name: A name for the operation. Defaults to "range".

Returns: A 1-D int32 Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rank(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.rank, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.rank

Return

Applicative

Original documentation for Builder.rank

def rank(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.rank that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.rank

def rank(input, name=None)

Returns the rank of a tensor.

This operation returns an integer representing the rank of input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3

Note: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as "order", "degree", or "ndims."

Args: input: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor of type int32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rank_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.rank_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.rank_layer

Return

Applicative

Original documentation for Builder.rank_layer

def rank_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.rank, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.rank

def rank(input, name=None):

Returns the rank of a tensor.

This operation returns an integer representing the rank of input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3

Note: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as "order", "degree", or "ndims."

Args: input: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor of type int32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def raw_rnn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.raw_rnn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.raw_rnn

Return

Applicative

Original documentation for Builder.raw_rnn

def raw_rnn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.raw_rnn that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.raw_rnn

def raw_rnn(cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None)

Creates an RNN specified by RNNCell cell and loop function loop_fn.

NOTE: This method is still in testing, and the API may change.

This function is a more primitive version of dynamic_rnn that provides more direct access to the inputs each iteration. It also provides more control over when to start and finish reading the sequence, and what to emit for the output.

For example, it can be used to implement the dynamic decoder of a seq2seq model.

Instead of working with Tensor objects, most operations work with TensorArray objects directly.

The operation of raw_rnn, in pseudo-code, is basically the following:

time = tf.constant(0, dtype=tf.int32)
(finished, next_input, initial_state, _, loop_state) = loop_fn(
    time=time, cell_output=None, cell_state=None, loop_state=None)
emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)
state = initial_state
while not all(finished):
    (output, cell_state) = cell(next_input, state)
    (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
        time=time + 1, cell_output=output, cell_state=cell_state,
        loop_state=loop_state)
    # Emit zeros and copy forward state for minibatch entries that are finished.
    state = tf.select(finished, state, next_state)
    emit = tf.select(finished, tf.zeros_like(emit), emit)
    emit_ta = emit_ta.write(time, emit)
    # If any new minibatch entries are marked as finished, mark these.
    finished = tf.logical_or(finished, next_finished)
    time += 1
return (emit_ta, state, loop_state)

with the additional properties that output and state may be (possibly nested) tuples, as determined by cell.output_size and cell.state_size, and as a result the final state and emit_ta may themselves be tuples.

A simple implementation of dynamic_rnn via raw_rnn looks like this:

inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
                        dtype=tf.float32)
sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
inputs_ta = inputs_ta.unpack(inputs)

cell = tf.nn.rnn_cell.LSTMCell(num_units)

def loop_fn(time, cell_output, cell_state, loop_state):
    emit_output = cell_output  # == None for time == 0
    if cell_output is None:  # time == 0
        next_cell_state = cell.zero_state(batch_size, tf.float32)
    else:
        next_cell_state = cell_state
    elements_finished = (time >= sequence_length)
    finished = tf.reduce_all(elements_finished)
    next_input = tf.cond(
        finished,
        lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
        lambda: inputs_ta.read(time))
    next_loop_state = None
    return (elements_finished, next_input, next_cell_state,
            emit_output, next_loop_state)

outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)
outputs = outputs_ta.pack()

Args:

  • cell: An instance of RNNCell.
  • loop_fn: A callable that takes inputs (time, cell_output, cell_state, loop_state) and returns the tuple (finished, next_input, next_cell_state, emit_output, next_loop_state). Here time is an int32 scalar Tensor, cell_output is a Tensor or (possibly nested) tuple of tensors as determined by cell.output_size, and cell_state is a Tensor or (possibly nested) tuple of tensors, as determined by the loop_fn on its first call (and should match cell.state_size). The outputs are: finished, a boolean Tensor of shape [batch_size]; next_input, the next input to feed to cell; next_cell_state, the next state to feed to cell; and emit_output, the output to store for this iteration.

Note that `emit_output` should be a `Tensor` or (possibly nested)
tuple of tensors with shapes and structure matching `cell.output_size`
and `cell_output` above.  The parameter `cell_state` and output
`next_cell_state` may be either a single or (possibly nested) tuple
of tensors.  The parameter `loop_state` and
output `next_loop_state` may be either a single or (possibly nested) tuple
of `Tensor` and `TensorArray` objects.  This last parameter
may be ignored by `loop_fn` and the return value may be `None`.  If it
is not `None`, then the `loop_state` will be propagated through the RNN
loop, for use purely by `loop_fn` to keep track of its own state.
The `next_loop_state` parameter returned may be `None`.

The first call to `loop_fn` will be `time = 0`, `cell_output = None`,
`cell_state = None`, and `loop_state = None`.  For this call:
The `next_cell_state` value should be the value with which to initialize
the cell's state.  It may be a final state from a previous RNN or it
may be the output of `cell.zero_state()`.  It should be a
(possibly nested) tuple structure of tensors.
If `cell.state_size` is an integer, this must be
a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of
appropriate type and shape `[batch_size] + cell.state_size`.
If `cell.state_size` is a (possibly nested) tuple of ints or
`TensorShape`, this will be a tuple having the corresponding shapes.
The `emit_output` value may be  either `None` or a (possibly nested)
tuple structure of tensors, e.g.,
`(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`.
If this first `emit_output` return value is `None`,
then the `emit_ta` result of `raw_rnn` will have the same structure and
dtypes as `cell.output_size`.  Otherwise `emit_ta` will have the same
structure, shapes (prepended with a `batch_size` dimension), and dtypes
as `emit_output`.  The actual values returned for `emit_output` at this
initializing call are ignored.  Note, this emit structure must be
consistent across all time steps.

parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. scope: VariableScope for the created subgraph; defaults to "RNN".

Returns: A tuple (emit_ta, final_state, final_loop_state) where:

emit_ta: The RNN output TensorArray. If loop_fn returns a (possibly nested) set of Tensors for emit_output during initialization, (inputs time = 0, cell_output = None, and loop_state = None), then emit_ta will have the same structure, dtypes, and shapes as emit_output instead. If loop_fn returns emit_output = None during this call, the structure of cell.output_size is used: If cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then emit_ta will be a tuple having the same structure as cell.output_size, containing TensorArrays whose elements' shapes correspond to the shape data in cell.output_size.

final_state: The final cell state. If cell.state_size is an int, this will be shaped [batch_size, cell.state_size]. If it is a TensorShape, this will be shaped [batch_size] + cell.state_size. If it is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes.

final_loop_state: The final loop state as returned by loop_fn.

Raises: TypeError: If cell is not an instance of RNNCell, or loop_fn is not a callable.

# Generator template shown for reference: `f` is the Builder method being
# lifted, captured from the code that generates these Applicative methods.
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def raw_rnn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.raw_rnn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.raw_rnn_layer

Return

Applicative

Original documentation for Builder.raw_rnn_layer

def raw_rnn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.raw_rnn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.raw_rnn

def raw_rnn(cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None):

Creates an RNN specified by RNNCell cell and loop function loop_fn.

NOTE: This method is still in testing, and the API may change.

This function is a more primitive version of dynamic_rnn that provides more direct access to the inputs each iteration. It also provides more control over when to start and finish reading the sequence, and what to emit for the output.

For example, it can be used to implement the dynamic decoder of a seq2seq model.

Instead of working with Tensor objects, most operations work with TensorArray objects directly.

The operation of raw_rnn, in pseudo-code, is basically the following:

```python
time = tf.constant(0, dtype=tf.int32)
(finished, next_input, initial_state, _, loop_state) = loop_fn(
    time=time, cell_output=None, cell_state=None, loop_state=None)
emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)
state = initial_state
while not all(finished):
  (output, cell_state) = cell(next_input, state)
  (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
      time=time + 1, cell_output=output, cell_state=cell_state,
      loop_state=loop_state)
  # Emit zeros and copy forward state for minibatch entries that are finished.
  state = tf.select(finished, state, next_state)
  emit = tf.select(finished, tf.zeros_like(emit), emit)
  emit_ta = emit_ta.write(time, emit)
  # If any new minibatch entries are marked as finished, mark these.
  finished = tf.logical_or(finished, next_finished)
  time += 1
return (emit_ta, state, loop_state)
```

with the additional properties that output and state may be (possibly nested) tuples, as determined by cell.output_size and cell.state_size, and as a result the final state and emit_ta may themselves be tuples.

A simple implementation of dynamic_rnn via raw_rnn looks like this:

```python
inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
                        dtype=tf.float32)
sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
inputs_ta = inputs_ta.unpack(inputs)

cell = tf.nn.rnn_cell.LSTMCell(num_units)

def loop_fn(time, cell_output, cell_state, loop_state):
  emit_output = cell_output  # == None for time == 0
  if cell_output is None:  # time == 0
    next_cell_state = cell.zero_state(batch_size, tf.float32)
  else:
    next_cell_state = cell_state
  elements_finished = (time >= sequence_length)
  finished = tf.reduce_all(elements_finished)
  next_input = tf.cond(
      finished,
      lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
      lambda: inputs_ta.read(time))
  next_loop_state = None
  return (elements_finished, next_input, next_cell_state,
          emit_output, next_loop_state)

outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)
outputs = outputs_ta.pack()
```

Args: cell: An instance of RNNCell. loop_fn: A callable that takes inputs (time, cell_output, cell_state, loop_state) and returns the tuple (finished, next_input, next_cell_state, emit_output, next_loop_state). Here time is an int32 scalar Tensor, cell_output is a Tensor or (possibly nested) tuple of tensors as determined by cell.output_size, and cell_state is a Tensor or (possibly nested) tuple of tensors, as determined by the loop_fn on its first call (and should match cell.state_size). The outputs are: finished, a boolean Tensor of shape [batch_size], next_input: the next input to feed to cell, next_cell_state: the next state to feed to cell, and emit_output: the output to store for this iteration.

Note that `emit_output` should be a `Tensor` or (possibly nested)
tuple of tensors with shapes and structure matching `cell.output_size`
and `cell_output` above.  The parameter `cell_state` and output
`next_cell_state` may be either a single or (possibly nested) tuple
of tensors.  The parameter `loop_state` and
output `next_loop_state` may be either a single or (possibly nested) tuple
of `Tensor` and `TensorArray` objects.  This last parameter
may be ignored by `loop_fn` and the return value may be `None`.  If it
is not `None`, then the `loop_state` will be propagated through the RNN
loop, for use purely by `loop_fn` to keep track of its own state.
The `next_loop_state` parameter returned may be `None`.

The first call to `loop_fn` will be `time = 0`, `cell_output = None`,
`cell_state = None`, and `loop_state = None`.  For this call:
The `next_cell_state` value should be the value with which to initialize
the cell's state.  It may be a final state from a previous RNN or it
may be the output of `cell.zero_state()`.  It should be a
(possibly nested) tuple structure of tensors.
If `cell.state_size` is an integer, this must be
a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of
appropriate type and shape `[batch_size] + cell.state_size`.
If `cell.state_size` is a (possibly nested) tuple of ints or
`TensorShape`, this will be a tuple having the corresponding shapes.
The `emit_output` value may be  either `None` or a (possibly nested)
tuple structure of tensors, e.g.,
`(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`.
If this first `emit_output` return value is `None`,
then the `emit_ta` result of `raw_rnn` will have the same structure and
dtypes as `cell.output_size`.  Otherwise `emit_ta` will have the same
structure, shapes (prepended with a `batch_size` dimension), and dtypes
as `emit_output`.  The actual values returned for `emit_output` at this
initializing call are ignored.  Note, this emit structure must be
consistent across all time steps.

parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. scope: VariableScope for the created subgraph; defaults to "RNN".

Returns: A tuple (emit_ta, final_state, final_loop_state) where:

emit_ta: The RNN output TensorArray. If loop_fn returns a (possibly nested) set of Tensors for emit_output during initialization, (inputs time = 0, cell_output = None, and loop_state = None), then emit_ta will have the same structure, dtypes, and shapes as emit_output instead. If loop_fn returns emit_output = None during this call, the structure of cell.output_size is used: If cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then emit_ta will be a tuple having the same structure as cell.output_size, containing TensorArrays whose elements' shapes correspond to the shape data in cell.output_size.

final_state: The final cell state. If cell.state_size is an int, this will be shaped [batch_size, cell.state_size]. If it is a TensorShape, this will be shaped [batch_size] + cell.state_size. If it is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes.

final_loop_state: The final loop state as returned by loop_fn.

Raises: TypeError: If cell is not an instance of RNNCell, or loop_fn is not a callable.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def read_file(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.read_file, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.read_file

Return

Applicative

Original documentation for Builder.read_file

def read_file(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.read_file that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.read_file

def read_file(filename, name=None)

Reads and outputs the entire contents of the input filename.

Args: filename: A Tensor of type string. name: A name for the operation (optional).

Returns: A Tensor of type string.
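
The source gives no usage example here; as a minimal sketch (ours, not TensorFlow's; the path "example.txt" is hypothetical), the returned string tensor can be evaluated like any other:

```python
import tensorflow as tf

# Hypothetical path; assumes "example.txt" exists in the working directory.
contents = tf.read_file("example.txt")

with tf.Session() as sess:
    # Evaluates to the raw bytes of the file as a single string scalar.
    print(sess.run(contents))
```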

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def read_file_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.read_file_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.read_file_layer

Return

Applicative

Original documentation for Builder.read_file_layer

def read_file_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.read_file, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.read_file

def read_file(filename, name=None):

Reads and outputs the entire contents of the input filename.

Args: filename: A Tensor of type string. name: A name for the operation (optional).

Returns: A Tensor of type string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def real(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.real, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.real

Return

Applicative

Original documentation for Builder.real

def real(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.real that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.real

def real(input, name=None)

Returns the real part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float32 or float64 that is the real part of each element in input. All elements in input must be complex numbers of the form (a + bj), where a is the real part returned by this operation and b is the imaginary part.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]
```

If input is already real, it is returned unchanged.

Args: input: A Tensor. Must have numeric type. name: A name for the operation (optional).

Returns: A Tensor of type float32 or float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def real_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.real_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.real_layer

Return

Applicative

Original documentation for Builder.real_layer

def real_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.real, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.real

def real(input, name=None):

Returns the real part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float32 or float64 that is the real part of each element in input. All elements in input must be complex numbers of the form (a + bj), where a is the real part returned by this operation and b is the imaginary part.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]
```

If input is already real, it is returned unchanged.

Args: input: A Tensor. Must have numeric type. name: A name for the operation (optional).

Returns: A Tensor of type float32 or float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.reduce, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.reduce

Return

Applicative

Original documentation for BuilderTree.reduce

def reduce(tree, fn, initializer=None):

@immutable

Expects a function fn of type (Tensor, Tensor) -> Tensor and, optionally, an initializer, and applies Python's reduce function to tensorbuilder.core.builders.BuilderTree.tensors with these arguments; the resulting Tensor is then wrapped inside a Builder.

Parameters

  • fn: a function of type (Tensor, Tensor) -> Tensor.
  • initializer: an optional Tensor used as the initial element of the folding operation (default: None).

Return

  • tensorbuilder.core.builders.Builder

Example

Let's redo the example from tensorbuilder.core.builders.Builder.branch, this time doing the reduction ourselves instead of relying on the *_layer methods of tensorbuilder.core.builders.BuilderTree that do this for us:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = (
    tb.build(x)
    .branch(lambda x: [
        x.relu_layer(20)
        .linear_layer(5)
    ,
        x.sigmoid_layer(20)
        .linear_layer(5)
    ,
        x.tanh_layer(20)
        .linear_layer(5)
    ])
    .reduce(tf.add)
    .softmax()
    .tensor()
)

Same example using the DSL

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = tb.pipe(
    x,
    [
        tb.relu_layer(20)
        .linear_layer(5)
    ,
        tb.sigmoid_layer(20)
        .linear_layer(5)
    ,
        tb.tanh_layer(20)
        .linear_layer(5)
    ],
    tb.reduce(tf.add)
    .softmax()
    .tensor()
)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_all(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_all, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_all

Return

Applicative

Original documentation for Builder.reduce_all

def reduce_all(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_all that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_all

def reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the "logical and" of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[True,  True],
#         [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
```

Args: input_tensor: The boolean tensor to reduce. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_all_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_all_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_all_layer

Return

Applicative

Original documentation for Builder.reduce_all_layer

def reduce_all_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_all, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_all

def reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the "logical and" of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[True,  True],
#         [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
```

Args: input_tensor: The boolean tensor to reduce. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_any(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_any, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_any

Return

Applicative

Original documentation for Builder.reduce_any

def reduce_any(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_any that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_any

def reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the "logical or" of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[True,  True],
#         [False, False]]
tf.reduce_any(x) ==> True
tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
```

Args: input_tensor: The boolean tensor to reduce. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_any_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_any_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_any_layer

Return

Applicative

Original documentation for Builder.reduce_any_layer

def reduce_any_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_any, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_any

def reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the "logical or" of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[True,  True],
#         [False, False]]
tf.reduce_any(x) ==> True
tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
```

Args: input_tensor: The boolean tensor to reduce. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_join(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_join, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_join

Return

Applicative

Original documentation for Builder.reduce_join

def reduce_join(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_join that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_join

def reduce_join(inputs, reduction_indices, keep_dims=None, separator=None, name=None)

Joins a string Tensor across the given dimensions.

Computes the string join across dimensions in the given string Tensor of shape [d_0, d_1, ..., d_n-1]. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with -1 being equivalent to n - 1. Passing an empty reduction_indices joins all strings in linear index order and outputs a scalar string.

For example:

```
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> ["acbd"]
tf.reduce_join(a, [1, 0]) ==> ["abcd"]
tf.reduce_join(a, []) ==> ["abcd"]
```

Args: inputs: A Tensor of type string. The input to be joined. All reduced indices must have non-zero size. reduction_indices: A Tensor of type int32. The dimensions to reduce over. Dimensions are reduced in the order specified. Omitting reduction_indices is equivalent to passing [n-1, n-2, ..., 0]. Negative indices from -n to -1 are supported. keep_dims: An optional bool. Defaults to False. If True, retain reduced dimensions with length 1. separator: An optional string. Defaults to "". The separator to use when joining. name: A name for the operation (optional).

Returns: A Tensor of type string. Has shape equal to that of the input with reduced dimensions removed or set to 1 depending on keep_dims.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_join_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_join_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_join_layer

Return

Applicative

Original documentation for Builder.reduce_join_layer

def reduce_join_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_join, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_join

def reduce_join(inputs, reduction_indices, keep_dims=None, separator=None, name=None):

Joins a string Tensor across the given dimensions.

Computes the string join across dimensions in the given string Tensor of shape [d_0, d_1, ..., d_n-1]. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with -1 being equivalent to n - 1. Passing an empty reduction_indices joins all strings in linear index order and outputs a scalar string.

For example:

```
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> ["acbd"]
tf.reduce_join(a, [1, 0]) ==> ["abcd"]
tf.reduce_join(a, []) ==> ["abcd"]
```

Args: inputs: A Tensor of type string. The input to be joined. All reduced indices must have non-zero size. reduction_indices: A Tensor of type int32. The dimensions to reduce over. Dimensions are reduced in the order specified. Omitting reduction_indices is equivalent to passing [n-1, n-2, ..., 0]. Negative indices from -n to -1 are supported. keep_dims: An optional bool. Defaults to False. If True, retain reduced dimensions with length 1. separator: An optional string. Defaults to "". The separator to use when joining. name: A name for the operation (optional).

Returns: A Tensor of type string. Has shape equal to that of the input with reduced dimensions removed or set to 1 depending on keep_dims.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_logsumexp(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_logsumexp, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_logsumexp

Return

Applicative

Original documentation for Builder.reduce_logsumexp

def reduce_logsumexp(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_logsumexp that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_logsumexp

def reduce_logsumexp(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes log(sum(exp(elements across dimensions of a tensor))).

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

This function is more numerically stable than log(sum(exp(input))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs.

For example:

```python
# 'x' is [[0, 0, 0],
#         [0, 0, 0]]
tf.reduce_logsumexp(x) ==> log(6)
tf.reduce_logsumexp(x, 0) ==> [log(2), log(2), log(2)]
tf.reduce_logsumexp(x, 1) ==> [log(3), log(3)]
tf.reduce_logsumexp(x, 1, keep_dims=True) ==> [[log(3)], [log(3)]]
tf.reduce_logsumexp(x, [0, 1]) ==> log(6)
```

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_logsumexp_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_logsumexp_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_logsumexp_layer

Return

Applicative

Original documentation for Builder.reduce_logsumexp_layer

def reduce_logsumexp_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_logsumexp, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_logsumexp

def reduce_logsumexp(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes log(sum(exp(elements across dimensions of a tensor))).

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

This function is more numerically stable than log(sum(exp(input))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs.

For example:

```python
# 'x' is [[0, 0, 0],
#         [0, 0, 0]]
tf.reduce_logsumexp(x) ==> log(6)
tf.reduce_logsumexp(x, 0) ==> [log(2), log(2), log(2)]
tf.reduce_logsumexp(x, 1) ==> [log(3), log(3)]
tf.reduce_logsumexp(x, 1, keep_dims=True) ==> [[log(3)], [log(3)]]
tf.reduce_logsumexp(x, [0, 1]) ==> log(6)
```

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_max(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_max, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_max

Return

Applicative

Original documentation for Builder.reduce_max

def reduce_max(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_max that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_max

def reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the maximum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.
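
The original documentation gives no example for this reduction; the following sketch (ours, not TensorFlow's) shows the same pattern as the reduce_sum and reduce_mean examples:

```python
import tensorflow as tf

x = tf.constant([[1., 4.],
                 [3., 2.]])

max_all = tf.reduce_max(x)      # maximum over all entries ==> 4.0
max_cols = tf.reduce_max(x, 0)  # column-wise maxima ==> [3., 4.]
max_rows = tf.reduce_max(x, 1)  # row-wise maxima ==> [4., 3.]
```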

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_max_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_max_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_max_layer

Return

Applicative

Original documentation for Builder.reduce_max_layer

def reduce_max_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_max, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_max

def reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the maximum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_mean(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_mean, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_mean

Return

Applicative

Original documentation for Builder.reduce_mean

def reduce_mean(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_mean that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_mean

def reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the mean of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[1., 1.],
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
```

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_mean_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_mean_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_mean_layer

Return

Applicative

Original documentation for Builder.reduce_mean_layer

def reduce_mean_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_mean, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_mean

def reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the mean of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[1., 1.],
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
```

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_min(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_min, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_min

Return

Applicative

Original documentation for Builder.reduce_min

def reduce_min(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_min that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_min

def reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the minimum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.
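
As with reduce_max above, the source provides no example; a brief sketch (ours):

```python
import tensorflow as tf

x = tf.constant([[1., 4.],
                 [3., 2.]])

min_all = tf.reduce_min(x)      # minimum over all entries ==> 1.0
min_cols = tf.reduce_min(x, 0)  # column-wise minima ==> [1., 2.]
min_rows = tf.reduce_min(x, 1)  # row-wise minima ==> [1., 2.]
```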

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_min_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_min_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_min_layer

Return

Applicative

Original documentation for Builder.reduce_min_layer

def reduce_min_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_min, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_min

def reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the minimum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_prod(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_prod, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_prod

Return

Applicative

Original documentation for Builder.reduce_prod

def reduce_prod(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_prod that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_prod

def reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the product of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.
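
The source again omits an example; a short sketch (ours):

```python
import tensorflow as tf

x = tf.constant([[1., 2.],
                 [3., 4.]])

prod_all = tf.reduce_prod(x)      # product of all entries ==> 24.0
prod_cols = tf.reduce_prod(x, 0)  # column-wise products ==> [3., 8.]
prod_rows = tf.reduce_prod(x, 1)  # row-wise products ==> [2., 12.]
```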

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_prod_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_prod_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_prod_layer

Return

Applicative

Original documentation for Builder.reduce_prod_layer

def reduce_prod_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_prod, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_prod

def reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the product of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_sum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_sum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_sum

Return

Applicative

Original documentation for Builder.reduce_sum

def reduce_sum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reduce_sum that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reduce_sum

def reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the sum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[1, 1, 1],
#         [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
```

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reduce_sum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reduce_sum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reduce_sum_layer

Return

Applicative

Original documentation for Builder.reduce_sum_layer

def reduce_sum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reduce_sum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reduce_sum

def reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None):

Computes the sum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
# 'x' is [[1, 1, 1],
#         [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
```

Args: input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. keep_dims: If true, retains reduced dimensions with length 1. name: A name for the operation (optional).

Returns: The reduced tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def register_map_method(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.register_map_method, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.register_map_method

Return

Applicative

Original documentation for Builder.register_map_method

def register_map_method(cls, fn, library_path, alias=None, doc=None):

This method enables you to register any function fn that takes a Tensor as its first argument and returns a Tensor as a method of the Builder class. The resulting method is created by lifting the function to work with a Builder.

Arguments

  • fn: a function of type Tensor -> Tensor.
  • library_path: the path of the library from which this function was taken, used for documentation purposes.
  • alias: allows you to specify the name of the method; it defaults to the name of the function if None.
  • doc: the documentation for the method; if None, a predefined documentation string will be generated based on the documentation of fn.

Return

None

Examples

In this example we will register tf.reshape as a method of the Builder class

import tensorflow as tf
from tensorbuilder import tb

tb.Builder.register_map_method(tf.reshape, "tf")
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def register_method(

cls, fn, library_path, alias=None, doc=None)

This method enables you to register any function fn that takes an Applicative as its first argument as a method of the Builder class.

Arguments

  • fn: a function that at least takes an Applicative as its first argument.
  • library_path: the path of the library from which this function was taken, used for documentation purposes.
  • alias: allows you to specify the name of the method; it defaults to the name of the function if None.
  • doc: the documentation for the method; if None, a predefined documentation string will be generated based on the documentation of fn.

Return

None

Examples
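
This section is empty in the source. As an illustrative sketch only (the function my_step, the library label "my_lib", and the access path tb.Applicative are assumptions, mirroring the register_map_method example above):

```python
import tensorflow as tf
from tensorbuilder import tb

# Hypothetical function: takes an Applicative as its first argument
# and chains two layer transformations onto it.
def my_step(app, size):
    return app.relu_layer(size).linear_layer(5)

# Assumed access path; "my_lib" is only used when generating the
# method's documentation.
tb.Applicative.register_method(my_step, "my_lib")
```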

@classmethod
def register_method(cls, fn, library_path, alias=None, doc=None):
    """
    This method enables you to register any function `fn` that takes an Applicative as its first argument as a method of the Builder class.
    **Arguments**
    * `fn`: a function that at least takes an Applicative as its first argument.
    * `library_path`: the path of the library from which this function was taken, used for documentation purposes.
    * `alias`: allows you to specify the name of the method; it defaults to the name of the function if `None`.
    * `doc`: the documentation for the method; if `None`, a predefined documentation string will be generated based on the documentation of `fn`.
    **Return**
    `None`
    **Examples**
    """
    fn_signature = utils.get_method_sig(fn)
    fn_docs = inspect.getdoc(fn)
    original_name = fn.__name__
    name = alias if alias else original_name
    fn.__name__ = name
    fn.__doc__ = doc if doc else """
    THIS METHOD IS AUTOMATICALLY GENERATED
    This method accepts the same arguments as `{3}.{0}`
    ** Documentation from `{3}.{0}`**
        def {1}
    """.format(name, fn_signature, fn.__doc__, library_path)
    setattr(cls, name, fn)

def register_reduce_method(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.register_reduce_method, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.register_reduce_method

Return

Applicative

Original documentation for BuilderTree.register_reduce_method

def register_reduce_method(cls, fn, library_path, alias=None, doc=None):

This method enables you to register a function fn of type (Tensor, Tensor) -> Tensor as a method of the BuilderTree class.

Arguments

  • fn: a function of type (Tensor, Tensor) -> Tensor
  • library_path: the path of the library from which this function was taken, used for documentation purposes.
  • alias: allows you to specify the name of the method; it defaults to the name of the function if None.
  • doc: the documentation for the method; if None, a predefined documentation string will be generated based on the documentation of fn.

Return

None

Examples

In this example we will create the method reduce_add for the BuilderTree class

import tensorflow as tf
from tensorbuilder import tb

tb.BuilderTree.register_reduce_method(tf.add, "tf", alias="reduce_add")
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def register_tensor_conversion_function(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.register_tensor_conversion_function, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.register_tensor_conversion_function

Return

Applicative

Original documentation for Builder.register_tensor_conversion_function

def register_tensor_conversion_function(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.register_tensor_conversion_function that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.register_tensor_conversion_function

def register_tensor_conversion_function(base_type, conversion_func, priority=100)

Registers a function for converting objects of base_type to Tensor.

The conversion function must have the following signature:

def conversion_func(value, dtype=None, name=None, as_ref=False):
  # ...

It must return a Tensor with the given dtype if specified. If the conversion function creates a new Tensor, it should use the given name if specified. All exceptions will be propagated to the caller.

The conversion function may return NotImplemented for some inputs. In this case, the conversion process will continue to try subsequent conversion functions.

If as_ref is true, the function must return a Tensor reference, such as a Variable.

NOTE: The conversion functions will execute in order of priority, followed by order of registration. To ensure that a conversion function F runs before another conversion function G, ensure that F is registered with a smaller priority than G.

Args:
  base_type: The base type or tuple of base types for all objects that conversion_func accepts.
  conversion_func: A function that converts instances of base_type to Tensor.
  priority: Optional integer that indicates the priority for applying this conversion function. Conversion functions with smaller priority values run earlier than conversion functions with larger priority values. Defaults to 100.

Raises: TypeError: If the arguments do not have the appropriate type.
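
Since the signature contract above is easy to get wrong, here is a hedged sketch of a registration; `Celsius` and `_celsius_to_tensor` are made-up names for illustration, not part of TensorFlow:

```python
import tensorflow as tf

class Celsius(object):
    def __init__(self, degrees):
        self.degrees = degrees

def _celsius_to_tensor(value, dtype=None, name=None, as_ref=False):
    # Delegate to the stock conversion, honoring the requested dtype/name;
    # exceptions propagate to the caller, as the contract above requires.
    return tf.convert_to_tensor(value.degrees, dtype=dtype, name=name)

tf.register_tensor_conversion_function(Celsius, _celsius_to_tensor)

# Celsius instances are now accepted wherever a Tensor is expected:
y = tf.add(Celsius(20.0), 1.5)
```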

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def register_tensor_conversion_function_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.register_tensor_conversion_function_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.register_tensor_conversion_function_layer

Return

Applicative

Original documentation for Builder.register_tensor_conversion_function_layer

def register_tensor_conversion_function_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.register_tensor_conversion_function, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.register_tensor_conversion_function

def register_tensor_conversion_function(base_type, conversion_func, priority=100):

Registers a function for converting objects of base_type to Tensor.

The conversion function must have the following signature:

def conversion_func(value, dtype=None, name=None, as_ref=False):
  # ...

It must return a Tensor with the given dtype if specified. If the conversion function creates a new Tensor, it should use the given name if specified. All exceptions will be propagated to the caller.

The conversion function may return NotImplemented for some inputs. In this case, the conversion process will continue to try subsequent conversion functions.

If as_ref is true, the function must return a Tensor reference, such as a Variable.

NOTE: The conversion functions will execute in order of priority, followed by order of registration. To ensure that a conversion function F runs before another conversion function G, ensure that F is registered with a smaller priority than G.

Args:
  base_type: The base type or tuple of base types for all objects that conversion_func accepts.
  conversion_func: A function that converts instances of base_type to Tensor.
  priority: Optional integer that indicates the priority for applying this conversion function. Conversion functions with smaller priority values run earlier than conversion functions with larger priority values. Defaults to 100.

Raises: TypeError: If the arguments do not have the appropriate type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def register_tensor_method(

cls, fn, library_path, alias=None, doc=None)

This method enables you to register any function fn that takes a Tensor as its first argument as a method of the Builder and Applicative classes.

Arguments

  • fn: a function that takes at least a Tensor as its first argument.
  • library_path: the path of the library from which this function was taken, used for documentation purposes.
  • alias: allows you to specify the name of the method; it will take the name of the function if it is None.
  • doc: the documentation for the method; if None, a predefined documentation will be generated based on the documentation of fn.

Return

None

Examples

@classmethod
def register_tensor_method(cls, fn, library_path, alias=None, doc=None):
    """
    This method enables you to register any function `fn` that takes a Tensor as its first argument as a method of the Builder and Applicative classes.
    **Arguments**
    * `fn`: a function that takes at least a Tensor as its first argument.
    * `library_path`: the path of the library from which this function was taken, used for documentation purposes.
    * `alias`: allows you to specify the name of the method; it will take the name of the function if it is `None`.
    * `doc`: the documentation for the method; if `None`, a predefined documentation will be generated based on the documentation of `fn`.
    **Return**
    `None`
    **Examples**
    """
    original_name = fn.__name__
    name = alias if alias else original_name
    method = get_app_method(name)
    cls.register_method(method, library_path, alias=name, doc=doc)
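
As with register_reduce_method above, a one-line registration is the typical use. A sketch, assuming the classmethod is reachable on `tb.Applicative`; any unary `Tensor -> Tensor` function such as `tf.square` qualifies:

```python
import tensorflow as tf
from tensorbuilder import tb

# tf.square takes a Tensor as its first argument, so it qualifies; this
# should expose a .square(...) method on builders and applicatives alike.
tb.Applicative.register_tensor_method(tf.square, "tf")
```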

def relu(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.relu, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.relu

Return

Applicative

Original documentation for Builder.relu

def relu(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.relu that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.relu

def relu(features, name=None)

Computes rectified linear: max(features, 0).

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def relu6(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.relu6, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.relu6

Return

Applicative

Original documentation for Builder.relu6

def relu6(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.relu6 that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.relu6

def relu6(features, name=None)

Computes Rectified Linear 6: min(max(features, 0), 6).

Args: features: A Tensor with type float, double, int32, int64, uint8, int16, or int8. name: A name for the operation (optional).

Returns: A Tensor with the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def relu6_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.relu6_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.relu6_layer

Return

Applicative

Original documentation for Builder.relu6_layer

def relu6_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.relu6, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.relu6

def relu6(features, name=None):

Computes Rectified Linear 6: min(max(features, 0), 6).

Args: features: A Tensor with type float, double, int32, int64, uint8, int16, or int8. name: A name for the operation (optional).

Returns: A Tensor with the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def relu_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.relu_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.relu_layer

Return

Applicative

Original documentation for Builder.relu_layer

def relu_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.relu, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.relu

def relu(features, name=None):

Computes rectified linear: max(features, 0).

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def report_uninitialized_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.report_uninitialized_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.report_uninitialized_variables

Return

Applicative

Original documentation for Builder.report_uninitialized_variables

def report_uninitialized_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.report_uninitialized_variables that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.report_uninitialized_variables

def report_uninitialized_variables(var_list=None, name="report_uninitialized_variables")

Adds ops to list the names of uninitialized variables.

When run, it returns a 1-D tensor containing the names of uninitialized variables if there are any, or an empty array if there are none.

Args: var_list: List of Variable objects to check. Defaults to the value of all_variables() + local_variables() name: Optional name of the Operation.

Returns: A 1-D tensor containing names of the uninitialized variables, or an empty 1-D tensor if there are no variables or no uninitialized variables.
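
A small sketch of the run-time behavior described above, using the era-appropriate tf.initialize_all_variables initializer:

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([3]), name="v")
uninit = tf.report_uninitialized_variables()

with tf.Session() as sess:
    print(sess.run(uninit))                  # contains "v": not yet initialized
    sess.run(tf.initialize_all_variables())
    print(sess.run(uninit))                  # empty: everything initialized
```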

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def report_uninitialized_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.report_uninitialized_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.report_uninitialized_variables_layer

Return

Applicative

Original documentation for Builder.report_uninitialized_variables_layer

def report_uninitialized_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.report_uninitialized_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.report_uninitialized_variables

def report_uninitialized_variables(var_list=None, name="report_uninitialized_variables"):

Adds ops to list the names of uninitialized variables.

When run, it returns a 1-D tensor containing the names of uninitialized variables if there are any, or an empty array if there are none.

Args: var_list: List of Variable objects to check. Defaults to the value of all_variables() + local_variables() name: Optional name of the Operation.

Returns: A 1-D tensor containing names of the uninitialized variables, or an empty 1-D tensor if there are no variables or no uninitialized variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def required_space_to_batch_paddings(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.required_space_to_batch_paddings, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.required_space_to_batch_paddings

Return

Applicative

Original documentation for Builder.required_space_to_batch_paddings

def required_space_to_batch_paddings(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.required_space_to_batch_paddings that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.required_space_to_batch_paddings

def required_space_to_batch_paddings(input_shape, block_shape, base_paddings=None, name=None)

Calculate padding required to make block_shape divide input_shape.

This function can be used to calculate a suitable paddings argument for use with space_to_batch_nd and batch_to_space_nd.

Args: input_shape: int32 Tensor of shape [N]. block_shape: int32 Tensor of shape [N]. base_paddings: Optional int32 Tensor of shape [N, 2]. Specifies the minimum amount of padding to use. All elements must be >= 0. If not specified, defaults to 0. name: string. Optional name prefix.

Returns: (paddings, crops), where:

paddings and crops are int32 Tensors of rank 2 and shape [N, 2] satisfying:

  paddings[i, 0] = base_paddings[i, 0].
  0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]
  (input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0

  crops[i, 0] = 0
  crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]

Raises: ValueError if called with incompatible shapes.
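
A quick numeric check of the relations above (a sketch, with the session API of this era): with input_shape = [5] and block_shape = [3], one unit of right padding makes (5 + 0 + 1) % 3 == 0, and that same unit is cropped back out afterwards.

```python
import tensorflow as tf

paddings, crops = tf.required_space_to_batch_paddings([5], [3])
with tf.Session() as sess:
    print(sess.run([paddings, crops]))  # ==> [array([[0, 1]]), array([[0, 1]])]
```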

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def required_space_to_batch_paddings_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.required_space_to_batch_paddings_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.required_space_to_batch_paddings_layer

Return

Applicative

Original documentation for Builder.required_space_to_batch_paddings_layer

def required_space_to_batch_paddings_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.required_space_to_batch_paddings, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.required_space_to_batch_paddings

def required_space_to_batch_paddings(input_shape, block_shape, base_paddings=None, name=None):

Calculate padding required to make block_shape divide input_shape.

This function can be used to calculate a suitable paddings argument for use with space_to_batch_nd and batch_to_space_nd.

Args: input_shape: int32 Tensor of shape [N]. block_shape: int32 Tensor of shape [N]. base_paddings: Optional int32 Tensor of shape [N, 2]. Specifies the minimum amount of padding to use. All elements must be >= 0. If not specified, defaults to 0. name: string. Optional name prefix.

Returns: (paddings, crops), where:

paddings and crops are int32 Tensors of rank 2 and shape [N, 2] satisfying:

  paddings[i, 0] = base_paddings[i, 0].
  0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]
  (input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0

  crops[i, 0] = 0
  crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]

Raises: ValueError if called with incompatible shapes.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reset_default_graph(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reset_default_graph, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reset_default_graph

Return

Applicative

Original documentation for Builder.reset_default_graph

def reset_default_graph(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reset_default_graph that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reset_default_graph

def reset_default_graph()

Clears the default graph stack and resets the global default graph.

NOTE: The default graph is a property of the current thread. This function applies only to the current thread. Calling this function while a tf.Session or tf.InteractiveSession is active will result in undefined behavior. Using any previously created tf.Operation or tf.Tensor objects after calling this function will result in undefined behavior.
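
A minimal sketch of the intended use, e.g. isolating graph construction between unit tests (mind the caveats above about active sessions and stale Tensor objects):

```python
import tensorflow as tf

tf.reset_default_graph()   # start from a clean default graph
x = tf.constant(1.0)       # x now lives in the fresh graph
```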

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reset_default_graph_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reset_default_graph_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reset_default_graph_layer

Return

Applicative

Original documentation for Builder.reset_default_graph_layer

def reset_default_graph_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reset_default_graph, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reset_default_graph

def reset_default_graph():

Clears the default graph stack and resets the global default graph.

NOTE: The default graph is a property of the current thread. This function applies only to the current thread. Calling this function while a tf.Session or tf.InteractiveSession is active will result in undefined behavior. Using any previously created tf.Operation or tf.Tensor objects after calling this function will result in undefined behavior.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reshape(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reshape, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reshape

Return

Applicative

Original documentation for Builder.reshape

def reshape(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reshape that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reshape

def reshape(tensor, shape, name=None)

Reshapes a tensor.

Given tensor, this operation returns a tensor that has the same values as tensor with shape shape.

If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.

If shape is 1-D or higher, then the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.

For example:

```prettyprint
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# tensor 't' is [[[1, 1], [2, 2]], [[3, 3], [4, 4]]]
# tensor 't' has shape [2, 2, 2]
reshape(t, [2, 4]) ==> [[1, 1, 2, 2], [3, 3, 4, 4]]

# tensor 't' is [[[1, 1, 1], [2, 2, 2]],
#                [[3, 3, 3], [4, 4, 4]],
#                [[5, 5, 5], [6, 6, 6]]]
# tensor 't' has shape [3, 2, 3]
# pass '[-1]' to flatten 't'
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]

# -1 can also be used to infer the shape:
# -1 is inferred to be 9:
reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 2:
reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 3:
reshape(t, [2, -1, 3]) ==> [[[1, 1, 1], [2, 2, 2], [3, 3, 3]],
                            [[4, 4, 4], [5, 5, 5], [6, 6, 6]]]

# tensor 't' is [7]
# shape [] reshapes to a scalar
reshape(t, []) ==> 7
```

Args: tensor: A Tensor. shape: A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reshape_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reshape_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reshape_layer

Return

Applicative

Original documentation for Builder.reshape_layer

def reshape_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reshape, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reshape

def reshape(tensor, shape, name=None):

Reshapes a tensor.

Given tensor, this operation returns a tensor that has the same values as tensor with shape shape.

If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.

If shape is 1-D or higher, then the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.

For example:

```prettyprint
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# tensor 't' is [[[1, 1], [2, 2]], [[3, 3], [4, 4]]]
# tensor 't' has shape [2, 2, 2]
reshape(t, [2, 4]) ==> [[1, 1, 2, 2], [3, 3, 4, 4]]

# tensor 't' is [[[1, 1, 1], [2, 2, 2]],
#                [[3, 3, 3], [4, 4, 4]],
#                [[5, 5, 5], [6, 6, 6]]]
# tensor 't' has shape [3, 2, 3]
# pass '[-1]' to flatten 't'
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]

# -1 can also be used to infer the shape:
# -1 is inferred to be 9:
reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 2:
reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 3:
reshape(t, [2, -1, 3]) ==> [[[1, 1, 1], [2, 2, 2], [3, 3, 3]],
                            [[4, 4, 4], [5, 5, 5], [6, 6, 6]]]

# tensor 't' is [7]
# shape [] reshapes to a scalar
reshape(t, []) ==> 7
```

Args: tensor: A Tensor. shape: A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reverse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reverse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reverse

Return

Applicative

Original documentation for Builder.reverse

def reverse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reverse that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reverse

def reverse(tensor, dims, name=None)

Reverses specific dimensions of a tensor.

Given a tensor, and a bool tensor dims representing the dimensions of tensor, this operation reverses each dimension i of tensor where dims[i] is True.

tensor can have up to 8 dimensions. The number of dimensions of tensor must equal the number of elements in dims. In other words:

rank(tensor) = size(dims)

For example:

```prettyprint
# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [False, False, False, True]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [False, True, False, False]
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [False, False, True, False]
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
```

Args: tensor: A Tensor. Must be one of the following types: uint8, int8, int32, int64, bool, half, float32, float64, complex64, complex128. Up to 8-D. dims: A Tensor of type bool. 1-D. The dimensions to reverse. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor. The same shape as tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reverse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reverse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reverse_layer

Return

Applicative

Original documentation for Builder.reverse_layer

def reverse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reverse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reverse

def reverse(tensor, dims, name=None):

Reverses specific dimensions of a tensor.

Given a tensor, and a bool tensor dims representing the dimensions of tensor, this operation reverses each dimension i of tensor where dims[i] is True.

tensor can have up to 8 dimensions. The number of dimensions of tensor must equal the number of elements in dims. In other words:

rank(tensor) = size(dims)

For example:

```prettyprint
# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [False, False, False, True]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [False, True, False, False]
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [False, False, True, False]
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
```

Args: tensor: A Tensor. Must be one of the following types: uint8, int8, int32, int64, bool, half, float32, float64, complex64, complex128. Up to 8-D. dims: A Tensor of type bool. 1-D. The dimensions to reverse. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as tensor. The same shape as tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reverse_sequence(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reverse_sequence, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reverse_sequence

Return

Applicative

Original documentation for Builder.reverse_sequence

def reverse_sequence(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.reverse_sequence that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.reverse_sequence

def reverse_sequence(input, seq_lengths, seq_dim, batch_dim=None, name=None)

Reverses variable length slices.

This op first slices input along the dimension batch_dim, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_dim.

The elements of seq_lengths must obey seq_lengths[i] < input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim].

The output slice i along dimension batch_dim is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_dim reversed.

For example:

```prettyprint
# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]

# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 2:, :, ...] = input[3, 2:, :, ...]
```

In contrast, if:

```prettyprint
# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]

# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...]
```

Args:
  input: A Tensor. The input to reverse.
  seq_lengths: A Tensor. Must be one of the following types: int32, int64. 1-D with length input.dims(batch_dim) and max(seq_lengths) < input.dims(seq_dim).
  seq_dim: An int. The dimension which is partially reversed.
  batch_dim: An optional int. Defaults to 0. The dimension along which reversal is performed.
  name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The partially reversed input. It has the same shape as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def reverse_sequence_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.reverse_sequence_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.reverse_sequence_layer

Return

Applicative

Original documentation for Builder.reverse_sequence_layer

def reverse_sequence_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.reverse_sequence, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.reverse_sequence

def reverse_sequence(input, seq_lengths, seq_dim, batch_dim=None, name=None):

Reverses variable length slices.

This op first slices input along the dimension batch_dim, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_dim.

The elements of seq_lengths must obey seq_lengths[i] < input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim].

The output slice i along dimension batch_dim is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_dim reversed.

For example:

```prettyprint
# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]

# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 2:, :, ...] = input[3, 2:, :, ...]
```

In contrast, if:

```prettyprint
# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]

# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...]
```

Args:
  input: A Tensor. The input to reverse.
  seq_lengths: A Tensor. Must be one of the following types: int32, int64. 1-D with length input.dims(batch_dim) and max(seq_lengths) < input.dims(seq_dim).
  seq_dim: An int. The dimension which is partially reversed.
  batch_dim: An optional int. Defaults to 0. The dimension along which reversal is performed.
  name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input. The partially reversed input. It has the same shape as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rnn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.rnn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.rnn

Return

Applicative

Original documentation for Builder.rnn

def rnn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.rnn that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.rnn

def rnn(inputs, cell)

(No docstring available for this wrapper; the full tf.nn.rnn documentation appears under rnn_layer below.)

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rnn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.rnn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.rnn_layer

Return

Applicative

Original documentation for Builder.rnn_layer

def rnn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.rnn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.rnn

def rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None):

Creates a recurrent neural network specified by RNNCell cell.

The simplest form of RNN network generated is:

```python
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
    output, state = cell(input_, state)
    outputs.append(output)
return (outputs, state)
```

However, a few other options are available:

An initial state can be provided. If the sequence_length vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output.

The dynamic calculation performed is, at time t for batch row b:

```
(output, state)(b, t) =
    (t >= sequence_length(b))
        ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
        : cell(input(b, t), state(b, t - 1))
```

Args:
  cell: An instance of RNNCell.
  inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements.
  initial_state: (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size.
  dtype: (optional) The data type for the initial state and expected output. Required if initial_state is not provided or the RNN state has a heterogeneous dtype.
  sequence_length: Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) of size [batch_size], with values in [0, T).
  scope: VariableScope for the created subgraph; defaults to "RNN".

Returns:
  A pair (outputs, state) where:
  - outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
  - state is the final state.

Raises:
  TypeError: If cell is not an instance of RNNCell.
  ValueError: If inputs is None or an empty list, or if the input depth (column size) cannot be inferred from inputs via shape inference.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rnn_placeholders_from_state(

applicative, zero_state, name='rnn_state')

THIS METHOD IS AUTOMATICALLY GENERATED

This method accepts the same arguments as tensorbuilder.Applicative.rnn_placeholders_from_state

Documentation from tensorbuilder.Applicative.rnn_placeholders_from_state

def rnn_placeholders_from_state(applicative, zero_state, name="rnn_state")
def rnn_placeholders_from_state(applicative, zero_state, name="rnn_state"):
    if isinstance(zero_state, tuple):
        return tuple([applicative.rnn_placeholders_from_state(substate, name=name) for substate in zero_state])
    else:
        return tf.placeholder(zero_state.dtype, shape=zero_state.get_shape(), name=name)

def rnn_state_feed_dict(

applicative, placeholders, values)

THIS METHOD IS AUTOMATICALLY GENERATED

This method accepts the same arguments as tensorbuilder.Applicative.rnn_state_feed_dict

Documentation from tensorbuilder.Applicative.rnn_state_feed_dict

def rnn_state_feed_dict(applicative, placeholders, values)
def rnn_state_feed_dict(applicative, placeholders, values):
    return dict(zip(utils.flatten(placeholders), utils.flatten(values)))
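
Taken together, the two helpers above build placeholders that mirror a (possibly nested) RNN state and then zip concrete state values back into a feed dict. A hedged sketch using the TF 1.x-era cell API, and assuming `tb` itself is the Applicative instance these methods are attached to:

```python
import tensorflow as tf
from tensorbuilder import tb

cell = tf.nn.rnn_cell.BasicLSTMCell(64)
zero_state = cell.zero_state(batch_size=32, dtype=tf.float32)

# Placeholders with the same nested (tuple) structure as the state...
state_phs = tb.rnn_placeholders_from_state(zero_state)

# ...and a feed dict pairing each placeholder with a concrete value:
with tf.Session() as sess:
    state_values = sess.run(zero_state)
    feed = tb.rnn_state_feed_dict(state_phs, state_values)
```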

def round(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.round, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.round

Return

Applicative

Original documentation for Builder.round

def round(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.round that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.round

def round(x, name=None)

Rounds the values of a tensor to the nearest integer, element-wise.

For example:

```python
# 'a' is [0.9, 2.5, 2.3, -4.4]
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
```

Args: x: A Tensor of type float32 or float64. name: A name for the operation (optional).

Returns: A Tensor of same shape and type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def round_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.round_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.round_layer

Return

Applicative

Original documentation for Builder.round_layer

def round_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.round, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.round

def round(x, name=None):

Rounds the values of a tensor to the nearest integer, element-wise.

For example:

```python
# 'a' is [0.9, 2.5, 2.3, -4.4]
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
```

Args: x: A Tensor of type float32 or float64. name: A name for the operation (optional).

Returns: A Tensor of same shape and type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rsqrt(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.rsqrt, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.rsqrt

Return

Applicative

Original documentation for Builder.rsqrt

def rsqrt(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.rsqrt that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.rsqrt

def rsqrt(x, name=None)

Computes reciprocal of square root of x element-wise.

I.e., \(y = 1 / \sqrt{x}\).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def rsqrt_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.rsqrt_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.rsqrt_layer

Return

Applicative

Original documentation for Builder.rsqrt_layer

def rsqrt_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.rsqrt, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.rsqrt

def rsqrt(x, name=None):

Computes reciprocal of square root of x element-wise.

I.e., \(y = 1 / \sqrt{x}\).

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sampled_softmax_loss(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sampled_softmax_loss, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sampled_softmax_loss

Return

Applicative

Original documentation for Builder.sampled_softmax_loss

def sampled_softmax_loss(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tensorbuilder.sampled_softmax_loss that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tensorbuilder.sampled_softmax_loss

def sampled_softmax_loss()

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided, then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs have a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args:
  inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. [batch_size, depth] or [None, None, None, channels].
  num_outputs: Integer, the number of output units in the layer.
  activation_fn: activation function.
  normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided, then biases_initializer and biases_regularizer are ignored and biases are neither created nor added.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None, biases are skipped.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
  variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs to.
  trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  scope: Optional scope for variable_op_scope.

Returns:
  The tensor variable representing the result of the series of operations.

Raises:
  ValueError: if inputs has rank less than 2 or if its last dimension is not set.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def saturate_cast(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.saturate_cast, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.saturate_cast

Return

Applicative

Original documentation for Builder.saturate_cast

def saturate_cast(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.saturate_cast that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.saturate_cast

def saturate_cast(value, dtype, name=None)

Performs a safe saturating cast of value to dtype.

This function casts the input to dtype without applying any scaling. If there is a danger that values would over or underflow in the cast, this op applies the appropriate clamping before the cast.

Args: value: A Tensor. dtype: The desired output DType. name: A name for the operation (optional).

Returns: value safely cast to dtype.
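
For instance, casting float32 values into uint8 clamps them into [0, 255] before the cast (a sketch, with the session API of this era):

```python
import tensorflow as tf

x = tf.constant([-1.0, 42.0, 300.0])
y = tf.saturate_cast(x, tf.uint8)
with tf.Session() as sess:
    print(sess.run(y))  # ==> [  0  42 255]
```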

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def saturate_cast_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.saturate_cast_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.saturate_cast_layer

Return

Applicative

Original documentation for Builder.saturate_cast_layer

def saturate_cast_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.saturate_cast, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.saturate_cast

def saturate_cast(value, dtype, name=None):

Performs a safe saturating cast of value to dtype.

This function casts the input to dtype without applying any scaling. If there is a danger that values would over or underflow in the cast, this op applies the appropriate clamping before the cast.

Args: value: A Tensor. dtype: The desired output DType. name: A name for the operation (optional).

Returns: value safely cast to dtype.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scalar_mul(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scalar_mul, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scalar_mul

Return

Applicative

Original documentation for Builder.scalar_mul

def scalar_mul(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scalar_mul that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scalar_mul

def scalar_mul(scalar, x)

Multiplies a scalar times a Tensor or IndexedSlices object.

Intended for use in gradient code which might deal with IndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors.

Args: scalar: A 0-D scalar Tensor. Must have known shape. x: A Tensor or IndexedSlices to be scaled.

Returns: scalar * x of the same type (Tensor or IndexedSlices) as x.

Raises: ValueError: if scalar is not a 0-D scalar.
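
The call shape, for reference (scalar first, tensor second; a 0-D constant satisfies the known-shape requirement):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.scalar_mul(tf.constant(2.0), x)  # ==> [2.0, 4.0, 6.0]
```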

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scalar_mul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scalar_mul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scalar_mul_layer

Return

Applicative

Original documentation for Builder.scalar_mul_layer

def scalar_mul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scalar_mul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scalar_mul

def scalar_mul(scalar, x):

Multiplies a scalar times a Tensor or IndexedSlices object.

Intended for use in gradient code which might deal with IndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors.

Args: scalar: A 0-D scalar Tensor. Must have known shape. x: A Tensor or IndexedSlices to be scaled.

Returns: scalar * x of the same type (Tensor or IndexedSlices) as x.

Raises: ValueError: if scalar is not a 0-D scalar.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scalar_summary(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scalar_summary, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scalar_summary

Return

Applicative

Original documentation for Builder.scalar_summary

def scalar_summary(builder, tag):

THIS METHOD IS AUTOMATICALLY GENERATED

Same as tf.scalar_summary(tags, values, collections=None, name=None) but with the summary tensor as its first parameter.

Return

Builder

Original documentation for tf.scalar_summary

def scalar_summary(tags, values, collections=None, name=None):

Outputs a Summary protocol buffer with scalar values.

The input tags and values must have the same shape. The generated summary has a summary value for each tag-value pair in tags and values.

Args: tags: A string Tensor. Tags for the summaries. values: A real numeric Tensor. Values for the summaries. collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to [GraphKeys.SUMMARIES]. name: A name for the operation (optional).

Returns: A scalar Tensor of type string. The serialized Summary protocol buffer.
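
A minimal sketch using the pre-1.0 name documented here (later TensorFlow versions renamed this to tf.summary.scalar):

```python
import tensorflow as tf

loss = tf.constant(0.25)
summary_op = tf.scalar_summary("loss", loss)

with tf.Session() as sess:
    serialized = sess.run(summary_op)  # a serialized Summary protocol buffer
```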

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scan(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scan, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scan

Return

Applicative

Original documentation for Builder.scan

def scan(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scan that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scan

def scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)

scan on the list of tensors unpacked from elems on dimension 0.

The simplest version of scan repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [len(values)] + fn(initializer, values[0]).shape.

This method also allows multi-arity elems and accumulator. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The second argument of fn must match the structure of elems.

If no initializer is provided, the output structure and dtypes of fn are assumed to be the same as its input; and in this case, the first argument of fn must match the structure of elems.

If an initializer is provided, then the output of fn must have the same structure as initializer; and the first argument of fn must match this structure.

For example, if elems is (t1, [t2, t3]) and initializer is [i1, i2] then an appropriate signature for fn in python2 is: fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]): and fn must return a list, [acc_n1, acc_n2]. An alternative correct signature for fn, and the one that works in python3, is: fn = lambda a, t:, where a and t correspond to the input tuples.

Args:
  fn: The callable to be performed. It accepts two arguments. The first will have the same (possibly nested) structure as elems. The second will have the same structure as initializer if one is provided, otherwise it will have the same structure as elems. Its output must have the same structure as initializer if one is provided, otherwise it must have the same structure as elems.
  elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
  initializer: (optional) A tensor or (possibly nested) sequence of tensors, initial value for the accumulator, and the expected output type of fn.
  parallel_iterations: (optional) The number of iterations allowed to run in parallel.
  back_prop: (optional) True enables support for back propagation.
  swap_memory: (optional) True enables GPU-CPU memory swapping.
  infer_shape: (optional) False disables tests for consistent output shapes.
  name: (optional) Name prefix for the returned tensors.

Returns: A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, and the previous accumulator value(s), from first to last.

Raises:

  • TypeError: if fn is not callable or the structure of the output of fn and initializer do not match.
  • ValueError: if the lengths of the output of fn and initializer do not match.

Examples:

    elems = np.array([1, 2, 3, 4, 5, 6])
    sum = scan(lambda a, x: a + x, elems)
    # sum == [1, 3, 6, 10, 15, 21]

    elems = np.array([1, 2, 3, 4, 5, 6])
    initializer = np.array(0)
    sum_one = scan(
        lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
    # sum_one == [1, 2, 3, 4, 5, 6]

    elems = np.array([1, 0, 0, 0, 0, 0])
    initializer = (np.array(0), np.array(1))
    fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
    # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])

def _method(app, *args, **kwargs):
    # `f` is a free variable here: the code generator that emits these
    # aliases binds it to the Builder method being lifted.
    def _lambda(builder):
        # Look up the lifted method on the builder and forward all arguments.
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scan_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scan_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scan_layer

Return

Applicative

Original documentation for Builder.scan_layer

def scan_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scan, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scan

def scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None):

scan on the list of tensors unpacked from elems on dimension 0.

The simplest version of scan repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [len(values)] + fn(initializer, values[0]).shape.

This method also allows multi-arity elems and accumulator. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The second argument of fn must match the structure of elems.

If no initializer is provided, the output structure and dtypes of fn are assumed to be the same as its input; and in this case, the first argument of fn must match the structure of elems.

If an initializer is provided, then the output of fn must have the same structure as initializer; and the first argument of fn must match this structure.

For example, if elems is (t1, [t2, t3]) and initializer is [i1, i2] then an appropriate signature for fn in Python 2 is: fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):, and fn must return a list, [acc_n1, acc_n2]. An alternative correct signature for fn, and the one that works in Python 3, is: fn = lambda a, t:, where a and t correspond to the input tuples.

Args:

  • fn: The callable to be performed. It accepts two arguments. The first will have the same (possibly nested) structure as elems. The second will have the same structure as initializer if one is provided, otherwise it will have the same structure as elems. Its output must have the same structure as initializer if one is provided, otherwise it must have the same structure as elems.
  • elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
  • initializer: (optional) A tensor or (possibly nested) sequence of tensors, initial value for the accumulator, and the expected output type of fn.
  • parallel_iterations: (optional) The number of iterations allowed to run in parallel.
  • back_prop: (optional) True enables support for back propagation.
  • swap_memory: (optional) True enables GPU-CPU memory swapping.
  • infer_shape: (optional) False disables tests for consistent output shapes.
  • name: (optional) Name prefix for the returned tensors.

Returns: A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, and the previous accumulator value(s), from first to last.

Raises:

  • TypeError: if fn is not callable or the structure of the output of fn and initializer do not match.
  • ValueError: if the lengths of the output of fn and initializer do not match.

Examples:

    elems = np.array([1, 2, 3, 4, 5, 6])
    sum = scan(lambda a, x: a + x, elems)
    # sum == [1, 3, 6, 10, 15, 21]

    elems = np.array([1, 2, 3, 4, 5, 6])
    initializer = np.array(0)
    sum_one = scan(
        lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
    # sum_one == [1, 2, 3, 4, 5, 6]

    elems = np.array([1, 0, 0, 0, 0, 0])
    initializer = (np.array(0), np.array(1))
    fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
    # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_add(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_add, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_add

Return

Applicative

Original documentation for Builder.scatter_add

def scatter_add(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scatter_add that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scatter_add

def scatter_add(ref, indices, updates, use_locking=None, name=None)

Adds sparse updates to a variable reference.

This operation computes

# Scalar indices
ref[indices, ...] += updates[...]

# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
  • use_locking: An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
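
To make these semantics concrete, here is a minimal runnable sketch (assuming the TF 0.x graph API this documentation was generated against; values are illustrative). Note how the duplicated index 0 accumulates both of its updates:

    import tensorflow as tf

    ref = tf.Variable([1.0, 2.0, 3.0, 4.0])
    indices = tf.constant([0, 2, 0])           # index 0 appears twice
    updates = tf.constant([10.0, 20.0, 30.0])
    scatter = tf.scatter_add(ref, indices, updates)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(scatter))  # [41.0, 2.0, 23.0, 4.0]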

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_add_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_add_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_add_layer

Return

Applicative

Original documentation for Builder.scatter_add_layer

def scatter_add_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scatter_add, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scatter_add

def scatter_add(ref, indices, updates, use_locking=None, name=None):

Adds sparse updates to a variable reference.

This operation computes

# Scalar indices
ref[indices, ...] += updates[...]

# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
  • use_locking: An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_div(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_div, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_div

Return

Applicative

Original documentation for Builder.scatter_div

def scatter_div(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scatter_div that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scatter_div

def scatter_div(ref, indices, updates, use_locking=None, name=None)

Divides a variable reference by sparse updates.

This operation computes

# Scalar indices
ref[indices, ...] /= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of values that ref is divided by.
  • use_locking: An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
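
A small sketch of the division semantics, under the same TF 0.x graph-API assumption (values illustrative):

    import tensorflow as tf

    ref = tf.Variable([6.0, 3.0, 8.0])
    scatter = tf.scatter_div(ref, tf.constant([0, 2]), tf.constant([2.0, 4.0]))

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(scatter))  # [3.0, 3.0, 2.0]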

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_div_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_div_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_div_layer

Return

Applicative

Original documentation for Builder.scatter_div_layer

def scatter_div_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scatter_div, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scatter_div

def scatter_div(ref, indices, updates, use_locking=None, name=None):

Divides a variable reference by sparse updates.

This operation computes

# Scalar indices
ref[indices, ...] /= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of values that ref is divided by.
  • use_locking: An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_mul(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_mul, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_mul

Return

Applicative

Original documentation for Builder.scatter_mul

def scatter_mul(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scatter_mul that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scatter_mul

def scatter_mul(ref, indices, updates, use_locking=None, name=None)

Multiplies sparse updates into a variable reference.

This operation computes

# Scalar indices
ref[indices, ...] *= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to multiply to ref.
  • use_locking: An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
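
A matching sketch for the multiplication case (TF 0.x graph API assumed; values illustrative):

    import tensorflow as tf

    ref = tf.Variable([1.0, 2.0, 3.0])
    scatter = tf.scatter_mul(ref, tf.constant([1, 2]), tf.constant([10.0, 10.0]))

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(scatter))  # [1.0, 20.0, 30.0]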

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_mul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_mul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_mul_layer

Return

Applicative

Original documentation for Builder.scatter_mul_layer

def scatter_mul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scatter_mul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scatter_mul

def scatter_mul(ref, indices, updates, use_locking=None, name=None):

Multiplies sparse updates into a variable reference.

This operation computes

# Scalar indices
ref[indices, ...] *= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to multiply to ref.
  • use_locking: An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_sub(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_sub, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_sub

Return

Applicative

Original documentation for Builder.scatter_sub

def scatter_sub(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scatter_sub that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scatter_sub

def scatter_sub(ref, indices, updates, use_locking=None, name=None)

Subtracts sparse updates from a variable reference.

# Scalar indices
ref[indices, ...] -= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref.
  • use_locking: An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
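
A minimal sketch of the subtraction semantics (TF 0.x graph API assumed; values illustrative). As with scatter_add, duplicate indices accumulate, here with negated contributions:

    import tensorflow as tf

    ref = tf.Variable([10.0, 10.0, 10.0])
    indices = tf.constant([0, 0, 1])            # duplicate index 0
    updates = tf.constant([1.0, 2.0, 3.0])
    scatter = tf.scatter_sub(ref, indices, updates)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(scatter))  # [7.0, 7.0, 10.0]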

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_sub_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_sub_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_sub_layer

Return

Applicative

Original documentation for Builder.scatter_sub_layer

def scatter_sub_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scatter_sub, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scatter_sub

def scatter_sub(ref, indices, updates, use_locking=None, name=None):

Subtracts sparse updates from a variable reference.

# Scalar indices
ref[indices, ...] -= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref.
  • use_locking: An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_update(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_update, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_update

Return

Applicative

Original documentation for Builder.scatter_update

def scatter_update(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.scatter_update that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.scatter_update

def scatter_update(ref, indices, updates, use_locking=None, name=None)

Applies sparse updates to a variable reference.

This operation computes

# Scalar indices
ref[indices, ...] = updates[...]

# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

If values in ref are to be updated more than once, because there are duplicate entries in indices, the order in which the updates happen for each value is undefined.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to store in ref.
  • use_locking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
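
A minimal sketch of the overwrite semantics (TF 0.x graph API assumed; values illustrative). Because the order of updates for duplicate indices is undefined, the sketch uses unique indices:

    import tensorflow as tf

    ref = tf.Variable([1.0, 2.0, 3.0, 4.0])
    scatter = tf.scatter_update(ref, tf.constant([1, 3]), tf.constant([9.0, 9.0]))

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(scatter))  # [1.0, 9.0, 3.0, 9.0]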

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def scatter_update_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.scatter_update_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.scatter_update_layer

Return

Applicative

Original documentation for Builder.scatter_update_layer

def scatter_update_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.scatter_update, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.scatter_update

def scatter_update(ref, indices, updates, use_locking=None, name=None):

Applies sparse updates to a variable reference.

This operation computes

# Scalar indices
ref[indices, ...] = updates[...]

# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]

This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

If values in ref are to be updated more than once, because there are duplicate entries in indices, the order in which the updates happen for each value is undefined.

Requires updates.shape = indices.shape + ref.shape[1:].

Args:

  • ref: A mutable Tensor. Should be from a Variable node.
  • indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
  • updates: A Tensor. Must have the same type as ref. A tensor of updated values to store in ref.
  • use_locking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  • name: A name for the operation (optional).

Returns: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_max(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_max, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_max

Return

Applicative

Original documentation for Builder.segment_max

def segment_max(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.segment_max that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.segment_max

def segment_max(data, segment_ids, name=None)

Computes the maximum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.
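
A minimal sketch of the segment semantics (TF 0.x graph API assumed; values illustrative). Two segments are defined by the sorted ids, and each output entry is the maximum over its segment:

    import tensorflow as tf

    data = tf.constant([5.0, 1.0, 7.0, 2.0, 3.0])
    segment_ids = tf.constant([0, 0, 1, 1, 1])  # sorted, as required
    seg_max = tf.segment_max(data, segment_ids)

    with tf.Session() as sess:
        print(sess.run(seg_max))  # [5.0, 7.0]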

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_max_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_max_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_max_layer

Return

Applicative

Original documentation for Builder.segment_max_layer

def segment_max_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.segment_max, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.segment_max

def segment_max(data, segment_ids, name=None):

Computes the maximum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_mean(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_mean, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_mean

Return

Applicative

Original documentation for Builder.segment_mean

def segment_mean(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.segment_mean that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.segment_mean

def segment_mean(data, segment_ids, name=None)

Computes the mean along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \frac{\sum_j data_j}{N}\) where mean is over j such that segment_ids[j] == i and N is the total number of values summed.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.
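
A small sketch of the mean case (TF 0.x graph API assumed; values illustrative):

    import tensorflow as tf

    data = tf.constant([1.0, 2.0, 3.0, 4.0])
    seg_mean = tf.segment_mean(data, tf.constant([0, 0, 1, 1]))

    with tf.Session() as sess:
        print(sess.run(seg_mean))  # [1.5, 3.5]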

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_mean_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_mean_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_mean_layer

Return

Applicative

Original documentation for Builder.segment_mean_layer

def segment_mean_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.segment_mean, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.segment_mean

def segment_mean(data, segment_ids, name=None):

Computes the mean along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \frac{\sum_j data_j}{N}\) where mean is over j such that segment_ids[j] == i and N is the total number of values summed.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_min(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_min, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_min

Return

Applicative

Original documentation for Builder.segment_min

def segment_min(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.segment_min that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.segment_min

def segment_min(data, segment_ids, name=None)

Computes the minimum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.
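
A small sketch of the minimum case (TF 0.x graph API assumed; values illustrative):

    import tensorflow as tf

    data = tf.constant([4.0, 2.0, 7.0, 5.0])
    seg_min = tf.segment_min(data, tf.constant([0, 0, 1, 1]))

    with tf.Session() as sess:
        print(sess.run(seg_min))  # [2.0, 5.0]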

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_min_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_min_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_min_layer

Return

Applicative

Original documentation for Builder.segment_min_layer

def segment_min_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.segment_min, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.segment_min

def segment_min(data, segment_ids, name=None):

Computes the minimum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_prod(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_prod, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_prod

Return

Applicative

Original documentation for Builder.segment_prod

def segment_prod(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.segment_prod that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.segment_prod

def segment_prod(data, segment_ids, name=None)

Computes the product along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.
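
A small sketch of the product case (TF 0.x graph API assumed; values illustrative):

    import tensorflow as tf

    data = tf.constant([2.0, 3.0, 4.0])
    seg_prod = tf.segment_prod(data, tf.constant([0, 0, 1]))

    with tf.Session() as sess:
        print(sess.run(seg_prod))  # [6.0, 4.0]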

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_prod_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_prod_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_prod_layer

Return

Applicative

Original documentation for Builder.segment_prod_layer

def segment_prod_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.segment_prod, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.segment_prod

def segment_prod(data, segment_ids, name=None):

Computes the product along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_sum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_sum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_sum

Return

Applicative

Original documentation for Builder.segment_sum

def segment_sum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.segment_sum that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.segment_sum

def segment_sum(data, segment_ids, name=None)

Computes the sum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.
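
A small sketch of the sum case (TF 0.x graph API assumed; values illustrative):

    import tensorflow as tf

    data = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
    seg_sum = tf.segment_sum(data, tf.constant([0, 0, 0, 1, 1]))

    with tf.Session() as sess:
        print(sess.run(seg_sum))  # [6.0, 9.0]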

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def segment_sum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.segment_sum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.segment_sum_layer

Return

Applicative

Original documentation for Builder.segment_sum_layer

def segment_sum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.segment_sum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.segment_sum

def segment_sum(data, segment_ids, name=None):

Computes the sum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over j such that segment_ids[j] == i.

Args:

  • data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def select(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.select, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.select

Return

Applicative

Original documentation for Builder.select

def select(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.select that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.select

def select(condition, t, e, name=None)

Selects elements from t or e, depending on condition.

The t and e tensors must all have the same shape, and the output will also have that shape. The condition tensor must be a scalar if t and e are scalars. If t and e are vectors or higher rank, then condition must be either a vector with size matching the first dimension of t, or must have the same shape as t.

The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from t (if true) or e (if false).

If condition is a vector and t and e are higher rank matrices, then it chooses which row (outer dimension) to copy from t and e. If condition has the same shape as t and e, then it chooses which element to copy from t and e.

For example:

    # 'condition' tensor is [[True,  False]
    #                        [False, True]]
    # 't' is [[1, 2],
    #         [3, 4]]
    # 'e' is [[5, 6],
    #         [7, 8]]
    select(condition, t, e) ==> [[1, 6], [7, 4]]

    # 'condition' tensor is [True, False]
    # 't' is [[1, 2],
    #         [3, 4]]
    # 'e' is [[5, 6],
    #         [7, 8]]
    select(condition, t, e) ==> [[1, 2], [7, 8]]

Args:

  • condition: A Tensor of type bool.
  • t: A Tensor which may have the same shape as condition. If condition is rank 1, t may have higher rank, but its first dimension must match the size of condition.
  • e: A Tensor with the same type and shape as t.
  • name: A name for the operation (optional).

Returns: A Tensor with the same type and shape as t and e.
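The example above is not runnable as written; a runnable equivalent (assuming the pre-1.0 tf.select, which was later replaced by tf.where) is:

    import tensorflow as tf

    condition = tf.constant([[True, False], [False, True]])
    t = tf.constant([[1, 2], [3, 4]])
    e = tf.constant([[5, 6], [7, 8]])

    with tf.Session() as sess:
        print(sess.run(tf.select(condition, t, e)))  # [[1, 6], [7, 4]]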

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def select_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.select_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.select_layer

Return

Applicative

Original documentation for Builder.select_layer

def select_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.select, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.select

def select(condition, t, e, name=None):

Selects elements from t or e, depending on condition.

The t and e tensors must all have the same shape, and the output will also have that shape. The condition tensor must be a scalar if t and e are scalars. If t and e are vectors or higher rank, then condition must be either a vector with size matching the first dimension of t, or must have the same shape as t.

The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from t (if true) or e (if false).

If condition is a vector and t and e are higher rank matrices, then it chooses which row (outer dimension) to copy from t and e. If condition has the same shape as t and e, then it chooses which element to copy from t and e.

For example:

    # 'condition' tensor is [[True,  False]
    #                        [False, True]]
    # 't' is [[1, 2],
    #         [3, 4]]
    # 'e' is [[5, 6],
    #         [7, 8]]
    select(condition, t, e) ==> [[1, 6], [7, 4]]

    # 'condition' tensor is [True, False]
    # 't' is [[1, 2],
    #         [3, 4]]
    # 'e' is [[5, 6],
    #         [7, 8]]
    select(condition, t, e) ==> [[1, 2], [7, 8]]

Args:

  • condition: A Tensor of type bool.
  • t: A Tensor which may have the same shape as condition. If condition is rank 1, t may have higher rank, but its first dimension must match the size of condition.
  • e: A Tensor with the same type and shape as t.
  • name: A name for the operation (optional).

Returns: A Tensor with the same type and shape as t and e.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def self_adjoint_eig(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.self_adjoint_eig, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.self_adjoint_eig

Return

Applicative

Original documentation for Builder.self_adjoint_eig

def self_adjoint_eig(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.self_adjoint_eig that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.self_adjoint_eig

def self_adjoint_eig(tensor, name=None)

Computes the eigen decomposition of a batch of self-adjoint matrices.

Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in tensor such that tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i], for i=0...N-1.

Args:

  • tensor: Tensor of shape [..., N, N]. Only the lower triangular part of each inner matrix is referenced.
  • name: string, optional name of the operation.

Returns:

  • e: Eigenvalues. Shape is [..., N].
  • v: Eigenvectors. Shape is [..., N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in tensor.
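
A small sketch with a matrix whose eigenvalues are known in closed form (TF 0.x graph API assumed; the tuple return matches the documentation above):

    import tensorflow as tf

    # Symmetric 2x2 matrix with eigenvalues 1 and 3.
    a = tf.constant([[2.0, 1.0], [1.0, 2.0]])
    e, v = tf.self_adjoint_eig(a)

    with tf.Session() as sess:
        vals, vecs = sess.run([e, v])
        # vals is approximately [1.0, 3.0]; the columns of vecs are the
        # corresponding eigenvectors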

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def self_adjoint_eig_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.self_adjoint_eig_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.self_adjoint_eig_layer

Return

Applicative

Original documentation for Builder.self_adjoint_eig_layer

def self_adjoint_eig_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.self_adjoint_eig, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.self_adjoint_eig

def self_adjoint_eig(tensor, name=None):

Computes the eigen decomposition of a batch of self-adjoint matrices.

Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in tensor such that tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i], for i=0...N-1.

Args:

  • tensor: Tensor of shape [..., N, N]. Only the lower triangular part of each inner matrix is referenced.
  • name: string, optional name of the operation.

Returns:

  • e: Eigenvalues. Shape is [..., N].
  • v: Eigenvectors. Shape is [..., N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def self_adjoint_eigvals(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.self_adjoint_eigvals, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.self_adjoint_eigvals

Return

Applicative

Original documentation for Builder.self_adjoint_eigvals

def self_adjoint_eigvals(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.self_adjoint_eigvals that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.self_adjoint_eigvals

def self_adjoint_eigvals(tensor, name=None)

Computes the eigenvalues of one or more self-adjoint matrices.

Args:

  • tensor: Tensor of shape [..., N, N].
  • name: string, optional name of the operation.

Returns: e: Eigenvalues. Shape is [..., N]. The vector e[..., :] contains the N eigenvalues of tensor[..., :, :].
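
The eigenvalues-only counterpart of the sketch above (same assumptions):

    import tensorflow as tf

    a = tf.constant([[2.0, 1.0], [1.0, 2.0]])
    e = tf.self_adjoint_eigvals(a)

    with tf.Session() as sess:
        print(sess.run(e))  # approximately [1.0, 3.0]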

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def self_adjoint_eigvals_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.self_adjoint_eigvals_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.self_adjoint_eigvals_layer

Return

Applicative

Original documentation for Builder.self_adjoint_eigvals_layer

def self_adjoint_eigvals_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.self_adjoint_eigvals, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.self_adjoint_eigvals

def self_adjoint_eigvals(tensor, name=None):

Computes the eigenvalues of one or more self-adjoint matrices.

Args:

  • tensor: Tensor of shape [..., N, N].
  • name: string, optional name of the operation.

Returns: e: Eigenvalues. Shape is [..., N]. The vector e[..., :] contains the N eigenvalues of tensor[..., :, :].

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def separable_conv2d(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.separable_conv2d, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.separable_conv2d

Return

Applicative

Original documentation for Builder.separable_conv2d

def separable_conv2d(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.separable_conv2d that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.separable_conv2d

def separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None)

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions [1, 2] and 3, not spatial separability between dimensions 1 and 2.

In detail,

output[b, i, j, k] = sum_{di, dj, q, r}
    input[b, strides[1] * i + di, strides[2] * j + dj, q] *
    depthwise_filter[di, dj, q, r] *
    pointwise_filter[0, 0, q * channel_multiplier + r, k]

strides controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of [1, 1, 1, 1]. Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args:

  • input: 4-D Tensor with shape [batch, in_height, in_width, in_channels].
  • depthwise_filter: 4-D Tensor with shape [filter_height, filter_width, in_channels, channel_multiplier]. Contains in_channels convolutional filters of depth 1.
  • pointwise_filter: 4-D Tensor with shape [1, 1, channel_multiplier * in_channels, out_channels]. Pointwise filter to mix channels after depthwise_filter has convolved spatially.
  • strides: 1-D of size 4. The strides for the depthwise convolution for each dimension of input.
  • padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
  • name: A name for this operation (optional).

Returns: A 4-D Tensor of shape [batch, out_height, out_width, out_channels].

Raises: ValueError: If channel_multiplier * in_channels > out_channels, which means that the separable convolution is overparameterized.
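
A shape-only sketch of the filter contract (TF 0.x graph API assumed; random values are illustrative). Note that out_channels must be at least channel_multiplier * in_channels per the Raises note above:

    import numpy as np
    import tensorflow as tf

    # batch=1, 8x8 input, in_channels=3, channel_multiplier=2, out_channels=8
    x = tf.constant(np.random.rand(1, 8, 8, 3).astype(np.float32))
    depthwise = tf.constant(np.random.rand(3, 3, 3, 2).astype(np.float32))
    pointwise = tf.constant(np.random.rand(1, 1, 6, 8).astype(np.float32))
    y = tf.nn.separable_conv2d(x, depthwise, pointwise,
                               strides=[1, 1, 1, 1], padding='SAME')
    # y has shape [1, 8, 8, 8]; channel_multiplier * in_channels = 6 <= 8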

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def separable_conv2d_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.separable_conv2d_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.separable_conv2d_layer

Return

Applicative

Original documentation for Builder.separable_conv2d_layer

def separable_conv2d_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.separable_conv2d, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.separable_conv2d

def separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None):

2-D convolution with separable filters.

Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions [1, 2] and 3, not spatial separability between dimensions 1 and 2.

In detail,

output[b, i, j, k] = sum_{di, dj, q, r}
    input[b, strides[1] * i + di, strides[2] * j + dj, q] *
    depthwise_filter[di, dj, q, r] *
    pointwise_filter[0, 0, q * channel_multiplier + r, k]

strides controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of [1, 1, 1, 1]. Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

Args: input: 4-D Tensor with shape [batch, in_height, in_width, in_channels]. depthwise_filter: 4-D Tensor with shape [filter_height, filter_width, in_channels, channel_multiplier]. Contains in_channels convolutional filters of depth 1. pointwise_filter: 4-D Tensor with shape [1, 1, channel_multiplier * in_channels, out_channels]. Pointwise filter to mix channels after depthwise_filter has convolved spatially. strides: 1-D of size 4. The strides for the depthwise convolution for each dimension of input. padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here name: A name for this operation (optional).

Returns: A 4-D Tensor of shape [batch, out_height, out_width, out_channels].

Raises: ValueError: If channel_multiplier * in_channels > out_channels, which means that the separable convolution is overparameterized.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sequence_mask(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sequence_mask, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sequence_mask

Return

Applicative

Original documentation for Builder.sequence_mask

def sequence_mask(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sequence_mask that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sequence_mask

def sequence_mask(lengths, maxlen=None, dtype=<dtype: 'bool'>, name=None)

Return a mask tensor representing the first N positions of each row.

Example:

```python
tf.sequence_mask([1, 3, 2], 5) =
  [[True, False, False, False, False],
   [True, True, True, False, False],
   [True, True, False, False, False]]
```

Args: lengths: 1D integer tensor, all its values < maxlen. maxlen: scalar integer tensor, maximum length of each row. Default: use maximum over lengths. dtype: output type of the resulting tensor. name: name of the op.

Returns: A 2D mask tensor, as shown in the example above, cast to specified dtype.

Raises: ValueError: if the arguments have invalid rank.
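
A common pattern is to cast the mask to a float and zero out padded positions before a reduction; a minimal sketch, with made-up scores:

```python
import tensorflow as tf

lengths = tf.constant([1, 3, 2])
mask = tf.sequence_mask(lengths, maxlen=5, dtype=tf.float32)  # shape [3, 5]

# Hypothetical per-position scores for 3 sequences padded to length 5.
scores = tf.placeholder(tf.float32, [3, 5])
masked_sum = tf.reduce_sum(scores * mask, 1)  # padded positions contribute 0
```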

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sequence_mask_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sequence_mask_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sequence_mask_layer

Return

Applicative

Original documentation for Builder.sequence_mask_layer

def sequence_mask_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sequence_mask, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sequence_mask

def sequence_mask(lengths, maxlen=None, dtype=<dtype: 'bool'>, name=None):

Return a mask tensor representing the first N positions of each row.

Example:

```python
tf.sequence_mask([1, 3, 2], 5) =
  [[True, False, False, False, False],
   [True, True, True, False, False],
   [True, True, False, False, False]]
```

Args: lengths: 1D integer tensor, all its values < maxlen. maxlen: scalar integer tensor, maximum length of each row. Default: use maximum over lengths. dtype: output type of the resulting tensor. name: name of the op.

Returns: A 2D mask tensor, as shown in the example above, cast to specified dtype.

Raises: ValueError: if the arguments have invalid rank.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def serialize_many_sparse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.serialize_many_sparse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.serialize_many_sparse

Return

Applicative

Original documentation for Builder.serialize_many_sparse

def serialize_many_sparse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.serialize_many_sparse that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.serialize_many_sparse

def serialize_many_sparse(sp_input, name=None)

Serialize an N-minibatch SparseTensor into an [N, 3] string Tensor.

The SparseTensor must have rank R greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the SparseTensor must be sorted in increasing order of this first dimension. The serialized SparseTensor objects going into each row of the output Tensor will have rank R-1.

The minibatch size N is extracted from sparse_shape[0].

Args: sp_input: The input rank R SparseTensor. name: A name prefix for the returned tensors (optional).

Returns: A string matrix (2-D Tensor) with N rows and 3 columns. Each column represents serialized SparseTensor's indices, values, and shape (respectively).

Raises: TypeError: If sp_input is not a SparseTensor.
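
A minimal sketch, assuming the old-style tf.SparseTensor constructor that takes indices, values, and shape:

```python
import tensorflow as tf

# Minibatch of 2 rows; the first dimension is the minibatch dimension.
sp = tf.SparseTensor(indices=[[0, 0], [1, 2]],
                     values=[1.0, 2.0],
                     shape=[2, 3])

serialized = tf.serialize_many_sparse(sp)
# serialized is a string Tensor of shape [2, 3]: one row per minibatch entry,
# with columns holding the serialized indices, values, and shape.
```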

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def serialize_many_sparse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.serialize_many_sparse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.serialize_many_sparse_layer

Return

Applicative

Original documentation for Builder.serialize_many_sparse_layer

def serialize_many_sparse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.serialize_many_sparse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.serialize_many_sparse

def serialize_many_sparse(sp_input, name=None):

Serialize an N-minibatch SparseTensor into an [N, 3] string Tensor.

The SparseTensor must have rank R greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the SparseTensor must be sorted in increasing order of this first dimension. The serialized SparseTensor objects going into each row of the output Tensor will have rank R-1.

The minibatch size N is extracted from sparse_shape[0].

Args: sp_input: The input rank R SparseTensor. name: A name prefix for the returned tensors (optional).

Returns: A string matrix (2-D Tensor) with N rows and 3 columns. Each column represents serialized SparseTensor's indices, values, and shape (respectively).

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def serialize_sparse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.serialize_sparse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.serialize_sparse

Return

Applicative

Original documentation for Builder.serialize_sparse

def serialize_sparse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.serialize_sparse that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.serialize_sparse

def serialize_sparse(sp_input, name=None)

Serialize a SparseTensor into a string 3-vector (1-D Tensor) object.

Args: sp_input: The input SparseTensor. name: A name prefix for the returned tensors (optional).

Returns: A string 3-vector (1D Tensor), with each column representing the serialized SparseTensor's indices, values, and shape (respectively).

Raises: TypeError: If sp_input is not a SparseTensor.
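
For comparison with the minibatch version above, a single-tensor sketch (same SparseTensor constructor assumption):

```python
import tensorflow as tf

sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0], shape=[2, 3])
serialized = tf.serialize_sparse(sp)
# serialized is a 1-D string Tensor of shape [3]:
# indices, values, and shape, in that order.
```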

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def serialize_sparse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.serialize_sparse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.serialize_sparse_layer

Return

Applicative

Original documentation for Builder.serialize_sparse_layer

def serialize_sparse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.serialize_sparse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.serialize_sparse

def serialize_sparse(sp_input, name=None):

Serialize a SparseTensor into a string 3-vector (1-D Tensor) object.

Args: sp_input: The input SparseTensor. name: A name prefix for the returned tensors (optional).

Returns: A string 3-vector (1D Tensor), with each column representing the serialized SparseTensor's indices, values, and shape (respectively).

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def set_random_seed(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.set_random_seed, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.set_random_seed

Return

Applicative

Original documentation for Builder.set_random_seed

def set_random_seed(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.set_random_seed that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.set_random_seed

def set_random_seed(seed)

Sets the graph-level random seed.

Operations that rely on a random seed actually derive it from two seeds: the graph-level and operation-level seeds. This sets the graph-level seed.

Its interactions with operation-level seeds are as follows:

  1. If neither the graph-level nor the operation seed is set: A random seed is used for this op.
  2. If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
  3. If the graph-level seed is not set, but the operation seed is set: A default graph-level seed and the specified operation seed are used to determine the random sequence.
  4. If both the graph-level and the operation seed are set: Both seeds are used in conjunction to determine the random sequence.

To illustrate the user-visible effects, consider these examples:

To generate different sequences across sessions, set neither graph-level nor op-level seeds:

```python
a = tf.random_uniform([1])
b = tf.random_normal([1])

print("Session 1")
with tf.Session() as sess1:
  print(sess1.run(a))  # generates 'A1'
  print(sess1.run(a))  # generates 'A2'
  print(sess1.run(b))  # generates 'B1'
  print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
  print(sess2.run(a))  # generates 'A3'
  print(sess2.run(a))  # generates 'A4'
  print(sess2.run(b))  # generates 'B3'
  print(sess2.run(b))  # generates 'B4'
```

To generate the same repeatable sequence for an op across sessions, set the seed for the op:

```python
a = tf.random_uniform([1], seed=1)
b = tf.random_normal([1])

# Repeatedly running this block with the same graph will generate the same
# sequence of values for 'a', but different sequences of values for 'b'.
print("Session 1")
with tf.Session() as sess1:
  print(sess1.run(a))  # generates 'A1'
  print(sess1.run(a))  # generates 'A2'
  print(sess1.run(b))  # generates 'B1'
  print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
  print(sess2.run(a))  # generates 'A1'
  print(sess2.run(a))  # generates 'A2'
  print(sess2.run(b))  # generates 'B3'
  print(sess2.run(b))  # generates 'B4'
```

To make the random sequences generated by all ops be repeatable across sessions, set a graph-level seed:

```python
tf.set_random_seed(1234)
a = tf.random_uniform([1])
b = tf.random_normal([1])

# Repeatedly running this block with the same graph will generate the same
# sequences of 'a' and 'b'.
print("Session 1")
with tf.Session() as sess1:
  print(sess1.run(a))  # generates 'A1'
  print(sess1.run(a))  # generates 'A2'
  print(sess1.run(b))  # generates 'B1'
  print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
  print(sess2.run(a))  # generates 'A1'
  print(sess2.run(a))  # generates 'A2'
  print(sess2.run(b))  # generates 'B1'
  print(sess2.run(b))  # generates 'B2'
```

Args: seed: integer.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def set_random_seed_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.set_random_seed_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.set_random_seed_layer

Return

Applicative

Original documentation for Builder.set_random_seed_layer

def set_random_seed_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.set_random_seed, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.set_random_seed

def set_random_seed(seed):

Sets the graph-level random seed.

Operations that rely on a random seed actually derive it from two seeds: the graph-level and operation-level seeds. This sets the graph-level seed.

Its interactions with operation-level seeds are as follows:

  1. If neither the graph-level nor the operation seed is set: A random seed is used for this op.
  2. If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
  3. If the graph-level seed is not set, but the operation seed is set: A default graph-level seed and the specified operation seed are used to determine the random sequence.
  4. If both the graph-level and the operation seed are set: Both seeds are used in conjunction to determine the random sequence.

To illustrate the user-visible effects, consider these examples:

To generate different sequences across sessions, set neither graph-level nor op-level seeds:

```python
a = tf.random_uniform([1])
b = tf.random_normal([1])

print("Session 1")
with tf.Session() as sess1:
  print(sess1.run(a))  # generates 'A1'
  print(sess1.run(a))  # generates 'A2'
  print(sess1.run(b))  # generates 'B1'
  print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
  print(sess2.run(a))  # generates 'A3'
  print(sess2.run(a))  # generates 'A4'
  print(sess2.run(b))  # generates 'B3'
  print(sess2.run(b))  # generates 'B4'
```

To generate the same repeatable sequence for an op across sessions, set the seed for the op:

```python
a = tf.random_uniform([1], seed=1)
b = tf.random_normal([1])

# Repeatedly running this block with the same graph will generate the same
# sequence of values for 'a', but different sequences of values for 'b'.
print("Session 1")
with tf.Session() as sess1:
  print(sess1.run(a))  # generates 'A1'
  print(sess1.run(a))  # generates 'A2'
  print(sess1.run(b))  # generates 'B1'
  print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
  print(sess2.run(a))  # generates 'A1'
  print(sess2.run(a))  # generates 'A2'
  print(sess2.run(b))  # generates 'B3'
  print(sess2.run(b))  # generates 'B4'
```

To make the random sequences generated by all ops be repeatable across sessions, set a graph-level seed:

```python
tf.set_random_seed(1234)
a = tf.random_uniform([1])
b = tf.random_normal([1])

# Repeatedly running this block with the same graph will generate the same
# sequences of 'a' and 'b'.
print("Session 1")
with tf.Session() as sess1:
  print(sess1.run(a))  # generates 'A1'
  print(sess1.run(a))  # generates 'A2'
  print(sess1.run(b))  # generates 'B1'
  print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
  print(sess2.run(a))  # generates 'A1'
  print(sess2.run(a))  # generates 'A2'
  print(sess2.run(b))  # generates 'B1'
  print(sess2.run(b))  # generates 'B2'
```

Args: seed: integer.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def shape(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.shape, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.shape

Return

Applicative

Original documentation for Builder.shape

def shape(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.shape that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.shape

def shape(input, name=None, out_type=<dtype: 'int32'>)

Returns the shape of a tensor.

This operation returns a 1-D integer tensor representing the shape of input.

For example:

```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
```

Args: input: A Tensor or SparseTensor. name: A name for the operation (optional). out_type: (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.

Returns: A Tensor of type out_type.
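
The useful distinction here is static versus dynamic shape: Tensor.get_shape() is fixed at graph-construction time, while tf.shape is evaluated at run time. A sketch with a partially known shape:

```python
import tensorflow as tf

t = tf.placeholder(tf.float32, [None, 28, 28, 3])

static = t.get_shape()   # TensorShape([None, 28, 28, 3]), known while building
dynamic = tf.shape(t)    # 1-D int32 Tensor, known only when the graph runs

batch_size = dynamic[0]  # usable even though the static batch dim is None
```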

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def shape_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.shape_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.shape_layer

Return

Applicative

Original documentation for Builder.shape_layer

def shape_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.shape, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.shape

def shape(input, name=None, out_type=<dtype: 'int32'>):

Returns the shape of a tensor.

This operation returns a 1-D integer tensor representing the shape of input.

For example:

```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
```

Args: input: A Tensor or SparseTensor. name: A name for the operation (optional). out_type: (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.

Returns: A Tensor of type out_type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def shape_n(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.shape_n, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.shape_n

Return

Applicative

Original documentation for Builder.shape_n

def shape_n(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.shape_n that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.shape_n

def shape_n(input, out_type=None, name=None)

Returns shape of tensors.

This operation returns N 1-D integer tensors representing the shape of each input[i].

Args: input: A list of at least 1 Tensor objects of the same type. out_type: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A list with the same number of Tensor objects as input of Tensor objects of type out_type.
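
In effect this is a batched tf.shape; a small sketch over two made-up tensors of the same dtype:

```python
import tensorflow as tf

a = tf.zeros([2, 3])
b = tf.zeros([4, 5, 6])
shape_a, shape_b = tf.shape_n([a, b])  # int32 Tensors [2, 3] and [4, 5, 6]
```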

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def shape_n_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.shape_n_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.shape_n_layer

Return

Applicative

Original documentation for Builder.shape_n_layer

def shape_n_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.shape_n, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.shape_n

def shape_n(input, out_type=None, name=None):

Returns shape of tensors.

This operation returns N 1-D integer tensors representing the shape of each input[i].

Args: input: A list of at least 1 Tensor objects of the same type. out_type: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A list with the same number of Tensor objects as input of Tensor objects of type out_type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sigmoid(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sigmoid, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sigmoid

Return

Applicative

Original documentation for Builder.sigmoid

def sigmoid(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sigmoid that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sigmoid

def sigmoid(x, name=None)

Computes sigmoid of x element-wise.

Specifically, y = 1 / (1 + exp(-x)).

Args: x: A Tensor with type float32, float64, int32, complex64, int64, or qint32. name: A name for the operation (optional).

Returns: A Tensor with the same type as x if x.dtype != qint32 otherwise the return type is quint8.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sigmoid_cross_entropy_with_logits(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sigmoid_cross_entropy_with_logits, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sigmoid_cross_entropy_with_logits

Return

Applicative

Original documentation for Builder.sigmoid_cross_entropy_with_logits

def sigmoid_cross_entropy_with_logits(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.sigmoid_cross_entropy_with_logits that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.sigmoid_cross_entropy_with_logits

def sigmoid_cross_entropy_with_logits(logits, targets, name=None)

Computes sigmoid cross entropy given logits.

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.

For brevity, let x = logits, z = targets. The logistic loss is

  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))

For x < 0, to avoid overflow in exp(-x), we reformulate the above

  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))

Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation

max(x, 0) - x * z + log(1 + exp(-abs(x)))

logits and targets must have the same type and shape.

Args: logits: A Tensor of type float32 or float64. targets: A Tensor of the same type and shape as logits. name: A name for the operation (optional).

Returns: A Tensor of the same shape as logits with the componentwise logistic losses.

Raises: ValueError: If logits and targets do not have the same shape.
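
A sketch of the typical multilabel setup, with shapes chosen for illustration; note the positional order documented above (logits first, then targets):

```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 5])   # unscaled scores
targets = tf.placeholder(tf.float32, [None, 5])  # independent 0/1 labels

# One loss per (example, label) pair, shape [batch, 5].
per_label_loss = tf.nn.sigmoid_cross_entropy_with_logits(logits, targets)
loss = tf.reduce_mean(per_label_loss)
```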

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sigmoid_cross_entropy_with_logits_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sigmoid_cross_entropy_with_logits_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sigmoid_cross_entropy_with_logits_layer

Return

Applicative

Original documentation for Builder.sigmoid_cross_entropy_with_logits_layer

def sigmoid_cross_entropy_with_logits_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.sigmoid_cross_entropy_with_logits, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.sigmoid_cross_entropy_with_logits

def sigmoid_cross_entropy_with_logits(logits, targets, name=None):

Computes sigmoid cross entropy given logits.

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.

For brevity, let x = logits, z = targets. The logistic loss is

  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))

For x < 0, to avoid overflow in exp(-x), we reformulate the above

  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))

Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation

max(x, 0) - x * z + log(1 + exp(-abs(x)))

logits and targets must have the same type and shape.

Args: logits: A Tensor of type float32 or float64. targets: A Tensor of the same type and shape as logits. name: A name for the operation (optional).

Returns: A Tensor of the same shape as logits with the componentwise logistic losses.

Raises: ValueError: If logits and targets do not have the same shape.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sigmoid_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sigmoid_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sigmoid_layer

Return

Applicative

Original documentation for Builder.sigmoid_layer

def sigmoid_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sigmoid, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sigmoid

def sigmoid(x, name=None):

Computes sigmoid of x element-wise.

Specifically, y = 1 / (1 + exp(-x)).

Args: x: A Tensor with type float32, float64, int32, complex64, int64, or qint32. name: A name for the operation (optional).

Returns: A Tensor with the same type as x if x.dtype != qint32 otherwise the return type is quint8.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sign(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sign, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sign

Return

Applicative

Original documentation for Builder.sign

def sign(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sign that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sign

def sign(x, name=None)

Returns an element-wise indication of the sign of a number.

y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.

Args: x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sign_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sign_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sign_layer

Return

Applicative

Original documentation for Builder.sign_layer

def sign_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sign, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sign

def sign(x, name=None):

Returns an element-wise indication of the sign of a number.

y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.

Args: x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sin(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sin, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sin

Return

Applicative

Original documentation for Builder.sin

def sin(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sin that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sin

def sin(x, name=None)

Computes sin of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sin_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sin_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sin_layer

Return

Applicative

Original documentation for Builder.sin_layer

def sin_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sin, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sin

def sin(x, name=None):

Computes sin of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def size(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.size, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.size

Return

Applicative

Original documentation for Builder.size

def size(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.size that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.size

def size(input, name=None, out_type=<dtype: 'int32'>)

Returns the size of a tensor.

This operation returns an integer representing the number of elements in input.

For example:

```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
```

Args: input: A Tensor or SparseTensor. name: A name for the operation (optional). out_type: (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.

Returns: A Tensor of type out_type. Defaults to tf.int32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def size_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.size_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.size_layer

Return

Applicative

Original documentation for Builder.size_layer

def size_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.size, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.size

def size(input, name=None, out_type=<dtype: 'int32'>):

Returns the size of a tensor.

This operation returns an integer representing the number of elements in input.

For example:

```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
```

Args: input: A Tensor or SparseTensor. name: A name for the operation (optional). out_type: (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.

Returns: A Tensor of type out_type. Defaults to tf.int32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def slice(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.slice, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.slice

Return

Applicative

Original documentation for Builder.slice

def slice(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.slice that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.slice

def slice(input_, begin, size, name=None)

Extracts a slice from a tensor.

This operation extracts a slice of size size from a tensor input starting at the location specified by begin. The slice size is represented as a tensor shape, where size[i] is the number of elements of the 'i'th dimension of input that you want to slice. The starting location (begin) for the slice is represented as an offset in each dimension of input. In other words, begin[i] is the offset into the 'i'th dimension of input that you want to slice from.

begin is zero-based; size is one-based. If size[i] is -1, all remaining elements in dimension i are included in the slice. In other words, this is equivalent to setting:

size[i] = input.dim_size(i) - begin[i]

This operation requires that:

0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]

For example:

```
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
                                            [4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
                                           [[5, 5, 5]]]
```

Args: input_: A Tensor. begin: An int32 or int64 Tensor. size: An int32 or int64 Tensor. name: A name for the operation (optional).

Returns: A Tensor the same type as input.
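
The size[i] = -1 convention is easy to miss; a sketch reproducing the example above and adding one -1 slice:

```python
import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])

a = tf.slice(t, [1, 0, 0], [1, 1, 3])   # [[[3, 3, 3]]]
b = tf.slice(t, [1, 0, 0], [-1, 1, 3])  # -1 takes everything remaining in
                                        # dim 0: [[[3, 3, 3]], [[5, 5, 5]]]
```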

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def slice_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.slice_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.slice_layer

Return

Applicative

Original documentation for Builder.slice_layer

def slice_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.slice, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.slice

def slice(input_, begin, size, name=None):

Extracts a slice from a tensor.

This operation extracts a slice of size size from a tensor input starting at the location specified by begin. The slice size is represented as a tensor shape, where size[i] is the number of elements of the 'i'th dimension of input that you want to slice. The starting location (begin) for the slice is represented as an offset in each dimension of input. In other words, begin[i] is the offset into the 'i'th dimension of input that you want to slice from.

begin is zero-based; size is one-based. If size[i] is -1, all remaining elements in dimension i are included in the slice. In other words, this is equivalent to setting:

size[i] = input.dim_size(i) - begin[i]

This operation requires that:

0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]

For example:

```
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
                                            [4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
                                           [[5, 5, 5]]]
```

Args: input_: A Tensor. begin: An int32 or int64 Tensor. size: An int32 or int64 Tensor. name: A name for the operation (optional).

Returns: A Tensor the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softmax(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softmax, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softmax

Return

Applicative

Original documentation for Builder.softmax

def softmax(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.softmax that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.softmax

def softmax(logits, dim=-1, name=None)

Computes softmax activations.

For each batch i and class j we have

softmax = exp(logits) / reduce_sum(exp(logits), dim)

Args: logits: A non-empty Tensor. Must be one of the following types: half, float32, float64. dim: The dimension softmax would be performed on. The default is -1 which indicates the last dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as logits. Same shape as logits.

Raises: InvalidArgumentError: if logits is empty or dim is beyond the last dimension of logits.
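
For intuition, the op is equivalent, up to the numerical-stability handling done internally, to normalizing exponentiated logits by hand:

```python
import tensorflow as tf

logits = tf.constant([[1.0, 2.0, 3.0]])

p = tf.nn.softmax(logits)
q = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), 1, keep_dims=True)
# p and q agree; tf.nn.softmax is the numerically stable form.
```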

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softmax_cross_entropy_with_logits(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softmax_cross_entropy_with_logits, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softmax_cross_entropy_with_logits

Return

Applicative

Original documentation for Builder.softmax_cross_entropy_with_logits

def softmax_cross_entropy_with_logits(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.softmax_cross_entropy_with_logits that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.softmax_cross_entropy_with_logits

def softmax_cross_entropy_with_logits(logits, labels, dim=-1, name=None)

Computes softmax cross entropy between logits and labels.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

NOTE: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive labels (wherein one and only one class is true at a time), see sparse_softmax_cross_entropy_with_logits.

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

logits and labels must have the same shape [batch_size, num_classes] and the same dtype (either float16, float32, or float64).

Args: logits: Unscaled log probabilities. labels: Each row labels[i] must be a valid probability distribution. dim: The class dimension. Defaulted to -1 which is the last dimension. name: A name for the operation (optional).

Returns: A 1-D Tensor of length batch_size of the same type as logits with the softmax cross entropy loss.
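
A sketch of the intended usage: feed raw logits rather than softmax outputs, and reduce the per-example losses yourself (shapes are illustrative):

```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 10])  # unscaled scores
labels = tf.placeholder(tf.float32, [None, 10])  # each row a distribution,
                                                 # e.g. one-hot

xent = tf.nn.softmax_cross_entropy_with_logits(logits, labels)  # shape [batch]
loss = tf.reduce_mean(xent)
```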

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softmax_cross_entropy_with_logits_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softmax_cross_entropy_with_logits_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softmax_cross_entropy_with_logits_layer

Return

Applicative

Original documentation for Builder.softmax_cross_entropy_with_logits_layer

def softmax_cross_entropy_with_logits_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.softmax_cross_entropy_with_logits, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.softmax_cross_entropy_with_logits

def softmax_cross_entropy_with_logits(logits, labels, dim=-1, name=None):

Computes softmax cross entropy between logits and labels.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

NOTE: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

If using exclusive labels (wherein one and only one class is true at a time), see sparse_softmax_cross_entropy_with_logits.

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

logits and labels must have the same shape [batch_size, num_classes] and the same dtype (either float16, float32, or float64).

Args: logits: Unscaled log probabilities. labels: Each row labels[i] must be a valid probability distribution. dim: The class dimension. Defaulted to -1 which is the last dimension. name: A name for the operation (optional).

Returns: A 1-D Tensor of length batch_size of the same type as logits with the softmax cross entropy loss.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softmax_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softmax_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softmax_layer

Return

Applicative

Original documentation for Builder.softmax_layer

def softmax_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.softmax, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.softmax

def softmax(logits, dim=-1, name=None):

Computes softmax activations.

For each batch i and class j we have

softmax = exp(logits) / reduce_sum(exp(logits), dim)

Args: logits: A non-empty Tensor. Must be one of the following types: half, float32, float64. dim: The dimension softmax would be performed on. The default is -1 which indicates the last dimension. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as logits. Same shape as logits.

Raises: InvalidArgumentError: if logits is empty or dim is beyond the last dimension of logits.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softplus(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softplus, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softplus

Return

Applicative

Original documentation for Builder.softplus

def softplus(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.softplus that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.softplus

def softplus(features, name=None)

Computes softplus: log(exp(features) + 1).

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.
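
softplus acts as a smooth approximation of relu; a two-line sketch:

```python
import tensorflow as tf

x = tf.linspace(-5.0, 5.0, 11)
y = tf.nn.softplus(x)  # log(exp(x) + 1): near 0 for x << 0, near x for x >> 0
```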

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softplus_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softplus_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softplus_layer

Return

Applicative

Original documentation for Builder.softplus_layer

def softplus_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.softplus, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.softplus

def softplus(features, name=None):

Computes softplus: log(exp(features) + 1).

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softsign(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softsign, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softsign

Return

Applicative

Original documentation for Builder.softsign

def softsign(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.softsign that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.softsign

def softsign(features, name=None)

Computes softsign: features / (abs(features) + 1).

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def softsign_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.softsign_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.softsign_layer

Return

Applicative

Original documentation for Builder.softsign_layer

def softsign_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.softsign, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.softsign

def softsign(features, name=None):

Computes softsign: features / (abs(features) + 1).

Args: features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as features.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def space_to_batch(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.space_to_batch, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.space_to_batch

Return

Applicative

Original documentation for Builder.space_to_batch

def space_to_batch(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.space_to_batch that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.space_to_batch

def space_to_batch(input, paddings, block_size, name=None)

SpaceToBatch for 4-D tensors of type T.

This is a legacy version of the more general SpaceToBatchND.

Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. After the zero-padding, both height and width of the input must be divisible by the block size.

Args: input: A Tensor. 4-D with shape [batch, height, width, depth]. paddings: A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies the padding of the input with zeros across the spatial dimensions as follows:

      paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]

  The effective spatial dimensions of the zero-padded input tensor will be:

      height_pad = pad_top + height + pad_bottom
      width_pad = pad_left + width + pad_right

The attr `block_size` must be greater than one. It indicates the block size.

  * Non-overlapping blocks of size `block_size x block_size` in the height and
    width dimensions are rearranged into the batch dimension at each location.
  * The batch of the output tensor is `batch * block_size * block_size`.
  * Both height_pad and width_pad must be divisible by block_size.

The shape of the output will be:

    [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
     depth]

Some examples:

(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

The output tensor has shape `[4, 1, 1, 1]` and value:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

The output tensor has shape `[4, 1, 1, 3]` and value:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]],
      [[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[4, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[8, 1, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
     [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
```

Among others, this operation is useful for reducing atrous convolution into
regular convolution.

block_size: An int that is >= 2. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.
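
A sketch reproducing example (1) above with no padding:

```python
import tensorflow as tf

x = tf.constant([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]
y = tf.space_to_batch(x, paddings=[[0, 0], [0, 0]], block_size=2)
# y has shape [4, 1, 1, 1] and value [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```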

def _method(app, *args, **kwargs):
    # `f` is the wrapped tf function, bound in the enclosing scope by the
    # code generator that emits these methods.
    def _lambda(builder):
        # Look up the Builder method of the same name and apply the
        # captured arguments to it.
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
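
For concreteness, a minimal usage sketch of the underlying op (plain TensorFlow 1.x-era API, mirroring example (1) above; an editor's illustration, not part of the generated documentation):

```python
import tensorflow as tf

x = tf.constant([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]
y = tf.space_to_batch(x, paddings=[[0, 0], [0, 0]], block_size=2)
# y has shape [4, 1, 1, 1]: each position of the 2x2 block becomes
# its own batch entry.
```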

def space_to_batch_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.space_to_batch_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.space_to_batch_layer

Return

Applicative

Original documentation for Builder.space_to_batch_layer

def space_to_batch_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.space_to_batch, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.space_to_batch

def space_to_batch(input, paddings, block_size, name=None):

SpaceToBatch for 4-D tensors of type T.

This is a legacy version of the more general SpaceToBatchND.

Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. After the zero-padding, both height and width of the input must be divisible by the block size.

Args:

  • input: A Tensor. 4-D with shape [batch, height, width, depth].
  • paddings: A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies the padding of the input with zeros across the spatial dimensions as follows:

        paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]

    The effective spatial dimensions of the zero-padded input tensor will be:

        height_pad = pad_top + height + pad_bottom
        width_pad = pad_left + width + pad_right

The attr `block_size` must be greater than one. It indicates the block size.

  * Non-overlapping blocks of size `block_size x block_size` in the height and
    width dimensions are rearranged into the batch dimension at each location.
  * The batch of the output tensor is `batch * block_size * block_size`.
  * Both height_pad and width_pad must be divisible by block_size.

The shape of the output will be:

    [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
     depth]

Some examples:

(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

The output tensor has shape `[4, 1, 1, 1]` and value:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

The output tensor has shape `[4, 1, 1, 3]` and value:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]],
      [[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[4, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[8, 1, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
     [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
```

Among others, this operation is useful for reducing atrous convolution into
regular convolution.

  • block_size: An int that is >= 2.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def space_to_batch_nd(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.space_to_batch_nd, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.space_to_batch_nd

Return

Applicative

Original documentation for Builder.space_to_batch_nd

def space_to_batch_nd(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.space_to_batch_nd to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.space_to_batch_nd

def space_to_batch_nd(input, block_shape, paddings, name=None)

SpaceToBatch for N-D tensors of type T.

This operation divides "spatial" dimensions [1, ..., M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description.

Args:

  • input: A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape: A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
  • paddings: A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. paddings[i] = [pad_start, pad_end] specifies the padding for input dimension i + 1, which corresponds to spatial dimension i. It is required that block_shape[i] divides input_shape[i + 1] + pad_start + pad_end.

This operation is equivalent to the following steps (see the NumPy sketch after the list):

1. Zero-pad the start and end of dimensions `[1, ..., M]` of the
   input according to `paddings` to produce `padded` of shape `padded_shape`.

2. Reshape `padded` to `reshaped_padded` of shape:
     [batch] +
     [padded_shape[1] / block_shape[0],
       block_shape[0],
      ...,
      padded_shape[M] / block_shape[M-1],
      block_shape[M-1]] +
     remaining_shape

3. Permute dimensions of `reshaped_padded` to produce
   `permuted_reshaped_padded` of shape:
     block_shape +
     [batch] +
     [padded_shape[1] / block_shape[0],
      ...,
      padded_shape[M] / block_shape[M-1]] +
     remaining_shape

4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch
   dimension, producing an output tensor of shape:
     [batch * prod(block_shape)] +
     [padded_shape[1] / block_shape[0],
      ...,
      padded_shape[M] / block_shape[M-1]] +
     remaining_shape
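
The four steps above can be sketched directly in NumPy. This is an illustrative reference implementation under the stated semantics; the name space_to_batch_nd_ref is hypothetical and not part of TensorFlow or tensorbuilder:

```python
import numpy as np

def space_to_batch_nd_ref(x, block_shape, paddings):
    M = len(block_shape)
    batch = x.shape[0]
    remaining = list(x.shape[1 + M:])
    # 1. Zero-pad the spatial dimensions [1, ..., M].
    pad = [(0, 0)] + [tuple(p) for p in paddings] + [(0, 0)] * len(remaining)
    padded = np.pad(x, pad, mode="constant")
    padded_spatial = padded.shape[1:1 + M]
    # 2. Split each padded spatial dimension into (num_blocks, block_size).
    split = [batch]
    for dim, b in zip(padded_spatial, block_shape):
        split += [dim // b, b]
    reshaped = padded.reshape(split + remaining)
    # 3. Bring the block_size axes to the front: block_shape + [batch] + blocks.
    perm = ([2 * i + 2 for i in range(M)] + [0] +
            [2 * i + 1 for i in range(M)] +
            [1 + 2 * M + i for i in range(len(remaining))])
    permuted = reshaped.transpose(perm)
    # 4. Flatten the block axes into the batch dimension.
    blocks = [d // b for d, b in zip(padded_spatial, block_shape)]
    return permuted.reshape([batch * int(np.prod(block_shape))] + blocks + remaining)

x = np.arange(1, 17).reshape(1, 4, 4, 1)  # example (3) below
print(space_to_batch_nd_ref(x, [2, 2], [[0, 0], [0, 0]])[0, :, :, 0])
# [[ 1  3]
#  [ 9 11]]
```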

Some examples:

(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and
    `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

The output tensor has shape `[4, 1, 1, 1]` and value:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and
    `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

The output tensor has shape `[4, 1, 1, 3]` and value:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and
    `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]],
      [[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[4, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and
    paddings = `[[0, 0], [2, 0]]`:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[8, 1, 3, 1]` and value:

```prettyprint
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]
```

Among others, this operation is useful for reducing atrous convolution into
regular convolution.

name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def space_to_batch_nd_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.space_to_batch_nd_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.space_to_batch_nd_layer

Return

Applicative

Original documentation for Builder.space_to_batch_nd_layer

def space_to_batch_nd_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.space_to_batch_nd, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.space_to_batch_nd

def space_to_batch_nd(input, block_shape, paddings, name=None):

SpaceToBatch for N-D tensors of type T.

This operation divides "spatial" dimensions [1, ..., M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description.

Args:

  • input: A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape: A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
  • paddings: A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. paddings[i] = [pad_start, pad_end] specifies the padding for input dimension i + 1, which corresponds to spatial dimension i. It is required that block_shape[i] divides input_shape[i + 1] + pad_start + pad_end.

This operation is equivalent to the following steps:

1. Zero-pad the start and end of dimensions `[1, ..., M]` of the
   input according to `paddings` to produce `padded` of shape `padded_shape`.

2. Reshape `padded` to `reshaped_padded` of shape:
     [batch] +
     [padded_shape[1] / block_shape[0],
       block_shape[0],
      ...,
      padded_shape[M] / block_shape[M-1],
      block_shape[M-1]] +
     remaining_shape

3. Permute dimensions of `reshaped_padded` to produce
   `permuted_reshaped_padded` of shape:
     block_shape +
     [batch] +
     [padded_shape[1] / block_shape[0],
      ...,
      padded_shape[M] / block_shape[M-1]] +
     remaining_shape

4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch
   dimension, producing an output tensor of shape:
     [batch * prod(block_shape)] +
     [padded_shape[1] / block_shape[0],
      ...,
      padded_shape[M] / block_shape[M-1]] +
     remaining_shape

Some examples:

(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and
    `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

The output tensor has shape `[4, 1, 1, 1]` and value:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and
    `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

The output tensor has shape `[4, 1, 1, 3]` and value:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and
    `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]],
      [[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[4, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and
    paddings = `[[0, 0], [2, 0]]`:

```prettyprint
x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]
```

The output tensor has shape `[8, 1, 3, 1]` and value:

```prettyprint
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]
```

Among others, this operation is useful for reducing atrous convolution into
regular convolution.

name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def space_to_depth(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.space_to_depth, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.space_to_depth

Return

Applicative

Original documentation for Builder.space_to_depth

def space_to_depth(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.space_to_depth to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.space_to_depth

def space_to_depth(input, block_size, name=None)

SpaceToDepth for tensors of type T.

Rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. The attr block_size indicates the input block size and how the data is moved.

  • Non-overlapping blocks of size block_size x block_size are rearranged into depth at each location.
  • The depth of the output tensor is input_depth * block_size * block_size.
  • The input tensor's height and width must be divisible by block_size.

That is, assuming the input is in the shape: [batch, height, width, depth], the shape of the output will be: [batch, height/block_size, width/block_size, depth*block_size*block_size]

This operation requires that the input tensor be of rank 4, and that block_size be >= 2 and a divisor of both the input height and width.

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given this input of shape [1, 2, 2, 1], and block_size of 2:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

This operation will output a tensor of shape [1, 1, 1, 4]:

```prettyprint
[[[[1, 2, 3, 4]]]]
```

Here, the input has a batch of 1 and each batch element has shape [2, 2, 1]; the corresponding output will have a single element (i.e. width and height are both 1) and a depth of 4 channels (1 * block_size * block_size). The output element shape is [1, 1, 4].

For an input tensor with larger depth, here of shape [1, 2, 2, 3], e.g.

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
```

This operation, for block_size of 2, will return the following tensor of shape [1, 1, 1, 12]:

```prettyprint
[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

Similarly, for the following input of shape [1, 4, 4, 1], and a block size of 2:

```prettyprint
x = [[[[1], [2], [5], [6]],
      [[3], [4], [7], [8]],
      [[9], [10], [13], [14]],
      [[11], [12], [15], [16]]]]
```

the operator will return the following tensor of shape [1, 2, 2, 4]:

```prettyprint
x = [[[[1, 2, 3, 4], [5, 6, 7, 8]],
      [[9, 10, 11, 12], [13, 14, 15, 16]]]]
```

Args:

  • input: A Tensor.
  • block_size: An int that is >= 2. The size of the spatial block.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
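
A minimal usage sketch of the underlying op (plain TensorFlow 1.x-era API; an editor's illustration):

```python
import tensorflow as tf

x = tf.constant([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]
y = tf.space_to_depth(x, block_size=2)
# y has shape [1, 1, 1, 4]: the 2x2 spatial block is packed into depth.
```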

def space_to_depth_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.space_to_depth_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.space_to_depth_layer

Return

Applicative

Original documentation for Builder.space_to_depth_layer

def space_to_depth_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.space_to_depth, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.space_to_depth

def space_to_depth(input, block_size, name=None):

SpaceToDepth for tensors of type T.

Rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. The attr block_size indicates the input block size and how the data is moved.

  • Non-overlapping blocks of size block_size x block_size are rearranged into depth at each location.
  • The depth of the output tensor is input_depth * block_size * block_size.
  • The input tensor's height and width must be divisible by block_size.

That is, assuming the input is in the shape: [batch, height, width, depth], the shape of the output will be: [batch, height/block_size, width/block_size, depth*block_size*block_size]

This operation requires that the input tensor be of rank 4, and that block_size be >= 2 and a divisor of both the input height and width.

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given this input of shape [1, 2, 2, 1], and block_size of 2:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

This operation will output a tensor of shape [1, 1, 1, 4]:

```prettyprint
[[[[1, 2, 3, 4]]]]
```

Here, the input has a batch of 1 and each batch element has shape [2, 2, 1]; the corresponding output will have a single element (i.e. width and height are both 1) and a depth of 4 channels (1 * block_size * block_size). The output element shape is [1, 1, 4].

For an input tensor with larger depth, here of shape [1, 2, 2, 3], e.g.

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
```

This operation, for block_size of 2, will return the following tensor of shape [1, 1, 1, 12]:

```prettyprint
[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

Similarly, for the following input of shape [1, 4, 4, 1], and a block size of 2:

```prettyprint
x = [[[[1], [2], [5], [6]],
      [[3], [4], [7], [8]],
      [[9], [10], [13], [14]],
      [[11], [12], [15], [16]]]]
```

the operator will return the following tensor of shape [1, 2, 2, 4]:

```prettyprint
x = [[[[1, 2, 3, 4], [5, 6, 7, 8]],
      [[9, 10, 11, 12], [13, 14, 15, 16]]]]
```

Args:

  • input: A Tensor.
  • block_size: An int that is >= 2. The size of the spatial block.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_add(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_add, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_add

Return

Applicative

Original documentation for Builder.sparse_add

def sparse_add(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_add to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_add

def sparse_add(a, b, thresh=0)

Adds two tensors, at least one of which is a SparseTensor.

If one SparseTensor and one Tensor are passed in, returns a Tensor. If both arguments are SparseTensors, this returns a SparseTensor. The order of arguments does not matter. Use vanilla tf.add() for adding two dense Tensors.

The indices of any input SparseTensor are assumed ordered in standard lexicographic order. If this is not the case, before this step run SparseReorder to restore index ordering.

If both arguments are sparse, we perform "clipping" as follows. By default, if two values sum to zero at some index, the output SparseTensor would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify thresh, indicating that if the sum has a magnitude strictly smaller than thresh, its corresponding value and index would then not be included. In particular, thresh == 0.0 (default) means everything is kept and actual thresholding happens only for a positive value.

For example, suppose the logical sum of two sparse operands is (densified):

[       2]
[.1     0]
[ 6   -.2]

Then,

- thresh == 0 (the default): all 5 index/value pairs will be returned.
- thresh == 0.11: only .1 and 0  will vanish, and the remaining three
    index/value pairs will be returned.
- thresh == 0.21: .1, 0, and -.2 will vanish.

Args:

  • a: The first operand; SparseTensor or Tensor.
  • b: The second operand; SparseTensor or Tensor. At least one operand must be sparse.
  • thresh: A 0-D Tensor. The magnitude threshold that determines if an output value/index pair takes space. Its dtype should match that of the values if they are real; if the latter are complex64/complex128, then the dtype should be float32/float64, correspondingly.

Returns: A SparseTensor or a Tensor, representing the sum.

Raises: TypeError: If both a and b are Tensors. Use tf.add() instead.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
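
A minimal sketch of the mixed sparse/dense case (TF 1.x-era API; an editor's illustration):

```python
import tensorflow as tf

sp = tf.SparseTensor([[0, 0], [1, 2]], [1.0, 2.0], [2, 3])
dense = tf.ones([2, 3])
total = tf.sparse_add(sp, dense)  # a dense Tensor of shape [2, 3]
```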

def sparse_add_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_add_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_add_layer

Return

Applicative

Original documentation for Builder.sparse_add_layer

def sparse_add_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_add, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_add

def sparse_add(a, b, thresh=0):

Adds two tensors, at least one of which is a SparseTensor.

If one SparseTensor and one Tensor are passed in, returns a Tensor. If both arguments are SparseTensors, this returns a SparseTensor. The order of arguments does not matter. Use vanilla tf.add() for adding two dense Tensors.

The indices of any input SparseTensor are assumed ordered in standard lexicographic order. If this is not the case, before this step run SparseReorder to restore index ordering.

If both arguments are sparse, we perform "clipping" as follows. By default, if two values sum to zero at some index, the output SparseTensor would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify thresh, indicating that if the sum has a magnitude strictly smaller than thresh, its corresponding value and index would then not be included. In particular, thresh == 0.0 (default) means everything is kept and actual thresholding happens only for a positive value.

For example, suppose the logical sum of two sparse operands is (densified):

[       2]
[.1     0]
[ 6   -.2]

Then,

- thresh == 0 (the default): all 5 index/value pairs will be returned.
- thresh == 0.11: only .1 and 0  will vanish, and the remaining three
    index/value pairs will be returned.
- thresh == 0.21: .1, 0, and -.2 will vanish.

Args:

  • a: The first operand; SparseTensor or Tensor.
  • b: The second operand; SparseTensor or Tensor. At least one operand must be sparse.
  • thresh: A 0-D Tensor. The magnitude threshold that determines if an output value/index pair takes space. Its dtype should match that of the values if they are real; if the latter are complex64/complex128, then the dtype should be float32/float64, correspondingly.

Returns: A SparseTensor or a Tensor, representing the sum.

Raises: TypeError: If both a and b are Tensors. Use tf.add() instead.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_concat(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_concat, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_concat

Return

Applicative

Original documentation for Builder.sparse_concat

def sparse_concat(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_concat to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_concat

def sparse_concat(concat_dim, sp_inputs, name=None, expand_nonconcat_dim=False)

Concatenates a list of SparseTensor along the specified dimension.

Concatenation is with respect to the dense versions of each sparse input. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number.

If expand_nonconcat_dim is False, all inputs' shapes must match, except for the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are allowed to vary among all inputs.

The indices, values, and shapes lists must have the same length.

If expand_nonconcat_dim is False, then the output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension.

If expand_nonconcat_dim is True, then the output shape along the non-concat dimensions will be expanded to the largest among all inputs, and it is the sum of the inputs' sizes along the concat dimension.

The output elements will be resorted to preserve the sort order along increasing dimension number.

This op runs in O(M log M) time, where M is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension.

For example, if concat_dim = 1 and the inputs are

sp_inputs[0]: shape = [2, 3]
[0, 2]: "a"
[1, 0]: "b"
[1, 1]: "c"

sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"

then the output will be

shape = [2, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[1, 1]: "c"

Graphically this is equivalent to doing

[    a] concat [  d e  ] = [    a   d e  ]
[b c  ]        [       ]   [b c          ]

Another example, if 'concat_dim = 1' and the inputs are

sp_inputs[0]: shape = [3, 3]
[0, 2]: "a"
[1, 0]: "b"
[2, 1]: "c"

sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"

if expand_nonconcat_dim = False, this will result in an error. But if expand_nonconcat_dim = True, this will result in:

shape = [3, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[2, 1]: "c"

Graphically this is equivalent to doing

[    a] concat [  d e  ] = [    a   d e  ]
[b    ]        [       ]   [b            ]
[  c  ]                    [  c          ]

Args:

  • concat_dim: Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor.
  • sp_inputs: List of SparseTensor to concatenate.
  • name: A name prefix for the returned tensors (optional).
  • expand_nonconcat_dim: Whether to allow the expansion in the non-concat dimensions. Defaults to False.

Returns: A SparseTensor with the concatenated output.

Raises: TypeError: If sp_inputs is not a list of SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
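
A minimal sketch reproducing the first example above (TF 1.x-era API; an editor's illustration):

```python
import tensorflow as tf

a = tf.SparseTensor([[0, 2], [1, 0], [1, 1]], ["a", "b", "c"], [2, 3])
b = tf.SparseTensor([[0, 1], [0, 2]], ["d", "e"], [2, 4])
out = tf.sparse_concat(1, [a, b])  # SparseTensor with shape [2, 7]
```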

def sparse_concat_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_concat_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_concat_layer

Return

Applicative

Original documentation for Builder.sparse_concat_layer

def sparse_concat_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_concat, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_concat

def sparse_concat(concat_dim, sp_inputs, name=None, expand_nonconcat_dim=False):

Concatenates a list of SparseTensor along the specified dimension.

Concatenation is with respect to the dense versions of each sparse input. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number.

If expand_nonconcat_dim is False, all inputs' shapes must match, except for the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are allowed to vary among all inputs.

The indices, values, and shapes lists must have the same length.

If expand_nonconcat_dim is False, then the output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension.

If expand_nonconcat_dim is True, then the output shape along the non-concat dimensions will be expanded to the largest among all inputs, and it is the sum of the inputs' sizes along the concat dimension.

The output elements will be resorted to preserve the sort order along increasing dimension number.

This op runs in O(M log M) time, where M is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension.

For example, if concat_dim = 1 and the inputs are

sp_inputs[0]: shape = [2, 3]
[0, 2]: "a"
[1, 0]: "b"
[1, 1]: "c"

sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"

then the output will be

shape = [2, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[1, 1]: "c"

Graphically this is equivalent to doing

[    a] concat [  d e  ] = [    a   d e  ]
[b c  ]        [       ]   [b c          ]

Another example, if 'concat_dim = 1' and the inputs are

sp_inputs[0]: shape = [3, 3]
[0, 2]: "a"
[1, 0]: "b"
[2, 1]: "c"

sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"

if expand_nonconcat_dim = False, this will result in an error. But if expand_nonconcat_dim = True, this will result in:

shape = [3, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[2, 1]: "c"

Graphically this is equivalent to doing

[    a] concat [  d e  ] = [    a   d e  ]
[b    ]        [       ]   [b            ]
[  c  ]                    [  c          ]

Args:

  • concat_dim: Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor.
  • sp_inputs: List of SparseTensor to concatenate.
  • name: A name prefix for the returned tensors (optional).
  • expand_nonconcat_dim: Whether to allow the expansion in the non-concat dimensions. Defaults to False.

Returns: A SparseTensor with the concatenated output.

Raises: TypeError: If sp_inputs is not a list of SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_fill_empty_rows(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_fill_empty_rows, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_fill_empty_rows

Return

Applicative

Original documentation for Builder.sparse_fill_empty_rows

def sparse_fill_empty_rows(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_fill_empty_rows to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_fill_empty_rows

def sparse_fill_empty_rows(sp_input, default_value, name=None)

Fills empty rows in the input 2-D SparseTensor with a default value.

This op adds entries with the specified default_value at index [row, 0] for any row in the input that does not already have a value.

For example, suppose sp_input has shape [5, 6] and non-empty values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

Rows 1 and 4 are empty, so the output will be of shape [5, 6] with values:

[0, 1]: a
[0, 3]: b
[1, 0]: default_value
[2, 0]: c
[3, 1]: d
[4, 0]: default_value

Note that the input may have empty columns at the end, with no effect on this op.

The output SparseTensor will be in row-major order and will have the same shape as the input.

This op also returns an indicator vector such that

empty_row_indicator[i] = True iff row i was an empty row.

Args:

  • sp_input: A SparseTensor with shape [N, M].
  • default_value: The value to fill for empty rows, with the same type as sp_input.
  • name: A name prefix for the returned tensors (optional).

Returns:

  • sp_ordered_output: A SparseTensor with shape [N, M], and with all empty rows filled in with default_value.
  • empty_row_indicator: A bool vector of length N indicating whether each input row was empty.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
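
A minimal sketch reproducing the example above (TF 1.x-era API; an editor's illustration):

```python
import tensorflow as tf

# The [5, 6] input above; rows 1 and 4 have no entries.
sp = tf.SparseTensor([[0, 1], [0, 3], [2, 0], [3, 1]],
                     ["a", "b", "c", "d"], [5, 6])
filled, empty_row_indicator = tf.sparse_fill_empty_rows(sp, "dv")
# `filled` gains entries [1, 0] -> "dv" and [4, 0] -> "dv";
# `empty_row_indicator` evaluates to [False, True, False, False, True].
```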

def sparse_fill_empty_rows_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_fill_empty_rows_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_fill_empty_rows_layer

Return

Applicative

Original documentation for Builder.sparse_fill_empty_rows_layer

def sparse_fill_empty_rows_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_fill_empty_rows, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_fill_empty_rows

def sparse_fill_empty_rows(sp_input, default_value, name=None):

Fills empty rows in the input 2-D SparseTensor with a default value.

This op adds entries with the specified default_value at index [row, 0] for any row in the input that does not already have a value.

For example, suppose sp_input has shape [5, 6] and non-empty values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

Rows 1 and 4 are empty, so the output will be of shape [5, 6] with values:

[0, 1]: a
[0, 3]: b
[1, 0]: default_value
[2, 0]: c
[3, 1]: d
[4, 0]: default_value

Note that the input may have empty columns at the end, with no effect on this op.

The output SparseTensor will be in row-major order and will have the same shape as the input.

This op also returns an indicator vector such that

empty_row_indicator[i] = True iff row i was an empty row.

Args:

  • sp_input: A SparseTensor with shape [N, M].
  • default_value: The value to fill for empty rows, with the same type as sp_input.
  • name: A name prefix for the returned tensors (optional).

Returns:

  • sp_ordered_output: A SparseTensor with shape [N, M], and with all empty rows filled in with default_value.
  • empty_row_indicator: A bool vector of length N indicating whether each input row was empty.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_mask(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_mask, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_mask

Return

Applicative

Original documentation for Builder.sparse_mask

def sparse_mask(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_mask to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_mask

def sparse_mask(a, mask_indices, name=None)

Masks elements of IndexedSlices.

Given an IndexedSlices instance a, returns another IndexedSlices that contains a subset of the slices of a. Only the slices at indices not specified in mask_indices are returned.

This is useful when you need to extract a subset of slices in an IndexedSlices object.

For example:

```python
# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices => [12, 26, 37, 45]
tf.shape(a.values) => [4, 10]

# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse_mask(a, [12, 45])

b.indices => [26, 37]
tf.shape(b.values) => [2, 10]
```

Args:

  • a: An IndexedSlices instance.
  • mask_indices: Indices of elements to mask.
  • name: A name for the operation (optional).

Returns: The masked IndexedSlices instance.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_mask_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_mask_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_mask_layer

Return

Applicative

Original documentation for Builder.sparse_mask_layer

def sparse_mask_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_mask, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_mask

def sparse_mask(a, mask_indices, name=None):

Masks elements of IndexedSlices.

Given an IndexedSlices instance a, returns another IndexedSlices that contains a subset of the slices of a. Only the slices at indices not specified in mask_indices are returned.

This is useful when you need to extract a subset of slices in an IndexedSlices object.

For example:

```python
# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices => [12, 26, 37, 45]
tf.shape(a.values) => [4, 10]

# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse_mask(a, [12, 45])

b.indices => [26, 37]
tf.shape(b.values) => [2, 10]
```

Args:

  • a: An IndexedSlices instance.
  • mask_indices: Indices of elements to mask.
  • name: A name for the operation (optional).

Returns: The masked IndexedSlices instance.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_matmul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_matmul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_matmul_layer

Return

Applicative

Original documentation for Builder.sparse_matmul_layer

def sparse_matmul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_matmul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_matmul

def _sparse_mat_mul(a, b, transpose_a=None, transpose_b=None, a_is_sparse=None, b_is_sparse=None, name=None):

Multiply matrix "a" by matrix "b".

The inputs must be two-dimensional matrices and the inner dimension of "a" must match the outer dimension of "b". This op is optimized for the case where at least one of "a" or "b" is sparse. The breakeven for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix.

Args:

  • a: A Tensor. Must be one of the following types: float32, bfloat16.
  • b: A Tensor. Must be one of the following types: float32, bfloat16.
  • transpose_a: An optional bool. Defaults to False.
  • transpose_b: An optional bool. Defaults to False.
  • a_is_sparse: An optional bool. Defaults to False.
  • b_is_sparse: An optional bool. Defaults to False.
  • name: A name for the operation (optional).

Returns: A Tensor of type float32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
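
A minimal sketch (TF 1.x-era API; an editor's illustration; the relu is only there to make `a` mostly zeros so the sparsity hint pays off):

```python
import tensorflow as tf

a = tf.nn.relu(tf.random_normal([100, 200]))  # roughly half the entries are zero
b = tf.random_normal([200, 50])
c = tf.sparse_matmul(a, b, a_is_sparse=True)  # Tensor of shape [100, 50]
```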

def sparse_maximum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_maximum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_maximum

Return

Applicative

Original documentation for Builder.sparse_maximum

def sparse_maximum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_maximum to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_maximum

def sparse_maximum(sp_a, sp_b, name=None)

Returns the element-wise max of two SparseTensors.

Assumes the two SparseTensors have the same shape, i.e., no broadcasting. Example:

```python
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[1]], [1], [7])
res = tf.sparse_maximum(sp_zero, sp_one).eval()
# "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).
```

Args:

  • sp_a: a SparseTensor operand whose dtype is real, and indices lexicographically ordered.
  • sp_b: the other SparseTensor operand with the same requirements (and the same shape).
  • name: optional name of the operation.

Returns: output: the output SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_maximum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_maximum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_maximum_layer

Return

Applicative

Original documentation for Builder.sparse_maximum_layer

def sparse_maximum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_maximum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_maximum

def sparse_maximum(sp_a, sp_b, name=None):

Returns the element-wise max of two SparseTensors.

Assumes the two SparseTensors have the same shape, i.e., no broadcasting. Example:

```python
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[1]], [1], [7])
res = tf.sparse_maximum(sp_zero, sp_one).eval()
# "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).
```

Args:

  • sp_a: a SparseTensor operand whose dtype is real, and indices lexicographically ordered.
  • sp_b: the other SparseTensor operand with the same requirements (and the same shape).
  • name: optional name of the operation.

Returns: output: the output SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_merge(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_merge, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_merge

Return

Applicative

Original documentation for Builder.sparse_merge

def sparse_merge(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_merge to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_merge

def sparse_merge(sp_ids, sp_values, vocab_size, name=None, already_sorted=False)

Combines a batch of feature ids and values into a single SparseTensor.

The most common use case for this function occurs when feature ids and their corresponding values are stored in Example protos on disk. parse_example will return a batch of ids and a batch of values, and this function joins them into a single logical SparseTensor for use in functions such as sparse_tensor_dense_matmul, sparse_to_dense, etc.

The SparseTensor returned by this function has the following properties:

  • indices is equivalent to sp_ids.indices with the last dimension discarded and replaced with sp_ids.values.
  • values is simply sp_values.values.
  • If sp_ids.shape = [D0, D1, ..., Dn, K], then output.shape = [D0, D1, ..., Dn, vocab_size].

For example, consider the following feature vectors:

```python
vector1 = [-3, 0, 0, 0, 0, 0]
vector2 = [ 0, 1, 0, 4, 1, 0]
vector3 = [ 5, 0, 0, 9, 0, 0]
```

These might be stored sparsely in the following Example protos by storing only the feature ids (column number if the vectors are treated as a matrix) of the non-zero elements and the corresponding values:

```python
examples = [Example(features={
                "ids": Feature(int64_list=Int64List(value=[0])),
                "values": Feature(float_list=FloatList(value=[-3]))}),
            Example(features={
                "ids": Feature(int64_list=Int64List(value=[1, 4, 3])),
                "values": Feature(float_list=FloatList(value=[1, 1, 4]))}),
            Example(features={
                "ids": Feature(int64_list=Int64List(value=[0, 3])),
                "values": Feature(float_list=FloatList(value=[5, 9]))})]
```

The result of calling parse_example on these examples will produce a dictionary with entries for "ids" and "values". Passing those two objects to this function along with vocab_size=6, will produce a SparseTensor that sparsely represents all three instances. Namely, the indices property will contain the coordinates of the non-zero entries in the feature matrix (the first dimension is the row number in the matrix, i.e., the index within the batch, and the second dimension is the column number, i.e., the feature id); values will contain the actual values. shape will be the shape of the original matrix, i.e., (3, 6). For our example above, the output will be equal to:

```python
SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],
             values=[-3, 1, 4, 1, 5, 9],
             shape=[3, 6])
```

Args:

  • sp_ids: A SparseTensor with values property of type int32 or int64.
  • sp_values: A SparseTensor of any type.
  • vocab_size: A scalar int64 Tensor (or Python int) containing the new size of the last dimension, with all(0 <= sp_ids.values < vocab_size).
  • name: A name prefix for the returned tensors (optional).
  • already_sorted: A boolean to specify whether the per-batch values in sp_values are already sorted. If so, sorting is skipped; False by default (optional).

Returns: A SparseTensor compactly representing a batch of feature ids and values, useful for passing to functions that expect such a SparseTensor.

Raises: TypeError: If sp_ids or sp_values are not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
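
A minimal sketch with hand-built "ids"/"values" batches equivalent to the three Example protos above (TF 1.x-era API; an editor's illustration):

```python
import tensorflow as tf

ix = [[0, 0], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1]]
sp_ids = tf.SparseTensor(ix, [0, 1, 4, 3, 0, 3], [3, 3])
sp_values = tf.SparseTensor(ix, [-3.0, 1.0, 1.0, 4.0, 5.0, 9.0], [3, 3])
merged = tf.sparse_merge(sp_ids, sp_values, vocab_size=6)
# merged has shape [3, 6] and equals the SparseTensor shown above.
```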

def sparse_merge_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_merge_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_merge_layer

Return

Applicative

Original documentation for Builder.sparse_merge_layer

def sparse_merge_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_merge, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_merge

def sparse_merge(sp_ids, sp_values, vocab_size, name=None, already_sorted=False):

Combines a batch of feature ids and values into a single SparseTensor.

The most common use case for this function occurs when feature ids and their corresponding values are stored in Example protos on disk. parse_example will return a batch of ids and a batch of values, and this function joins them into a single logical SparseTensor for use in functions such as sparse_tensor_dense_matmul, sparse_to_dense, etc.

The SparseTensor returned by this function has the following properties:

  • indices is equivalent to sp_ids.indices with the last dimension discarded and replaced with sp_ids.values.
  • values is simply sp_values.values.
  • If sp_ids.shape = [D0, D1, ..., Dn, K], then output.shape = [D0, D1, ..., Dn, vocab_size].

For example, consider the following feature vectors:

```python
vector1 = [-3, 0, 0, 0, 0, 0]
vector2 = [ 0, 1, 0, 4, 1, 0]
vector3 = [ 5, 0, 0, 9, 0, 0]
```

These might be stored sparsely in the following Example protos by storing only the feature ids (column number if the vectors are treated as a matrix) of the non-zero elements and the corresponding values:

```python
examples = [Example(features={
                "ids": Feature(int64_list=Int64List(value=[0])),
                "values": Feature(float_list=FloatList(value=[-3]))}),
            Example(features={
                "ids": Feature(int64_list=Int64List(value=[1, 4, 3])),
                "values": Feature(float_list=FloatList(value=[1, 1, 4]))}),
            Example(features={
                "ids": Feature(int64_list=Int64List(value=[0, 3])),
                "values": Feature(float_list=FloatList(value=[5, 9]))})]
```

The result of calling parse_example on these examples will produce a dictionary with entries for "ids" and "values". Passing those two objects to this function along with vocab_size=6, will produce a SparseTensor that sparsely represents all three instances. Namely, the indices property will contain the coordinates of the non-zero entries in the feature matrix (the first dimension is the row number in the matrix, i.e., the index within the batch, and the second dimension is the column number, i.e., the feature id); values will contain the actual values. shape will be the shape of the original matrix, i.e., (3, 6). For our example above, the output will be equal to:

```python
SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],
             values=[-3, 1, 4, 1, 5, 9],
             shape=[3, 6])
```

Args:

  • sp_ids: A SparseTensor with values property of type int32 or int64.
  • sp_values: A SparseTensor of any type.
  • vocab_size: A scalar int64 Tensor (or Python int) containing the new size of the last dimension, with all(0 <= sp_ids.values < vocab_size).
  • name: A name prefix for the returned tensors (optional).
  • already_sorted: A boolean to specify whether the per-batch values in sp_values are already sorted. If so, sorting is skipped; False by default (optional).

Returns: A SparseTensor compactly representing a batch of feature ids and values, useful for passing to functions that expect such a SparseTensor.

Raises: TypeError: If sp_ids or sp_values are not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_minimum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_minimum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_minimum

Return

Applicative

Original documentation for Builder.sparse_minimum

def sparse_minimum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_minimum to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_minimum

def sparse_minimum(sp_a, sp_b, name=None)

Returns the element-wise min of two SparseTensors.

Assumes the two SparseTensors have the same shape, i.e., no broadcasting. Example:

```python
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[1]], [1], [7])
res = tf.sparse_minimum(sp_zero, sp_one).eval()
# "res" should be equal to SparseTensor([[0], [1]], [0, 0], [7]).
```

Args:

  • sp_a: a SparseTensor operand whose dtype is real, and indices lexicographically ordered.
  • sp_b: the other SparseTensor operand with the same requirements (and the same shape).
  • name: optional name of the operation.

Returns: output: the output SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_minimum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_minimum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_minimum_layer

Return

Applicative

Original documentation for Builder.sparse_minimum_layer

def sparse_minimum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_minimum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_minimum

def sparse_minimum(sp_a, sp_b, name=None):

Returns the element-wise min of two SparseTensors.

Assumes the two SparseTensors have the same shape, i.e., no broadcasting. Example:

```python
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[1]], [1], [7])
res = tf.sparse_minimum(sp_zero, sp_one).eval()
# "res" should be equal to SparseTensor([[0], [1]], [0, 0], [7]).
```

Args:

  • sp_a: a SparseTensor operand whose dtype is real, and indices lexicographically ordered.
  • sp_b: the other SparseTensor operand with the same requirements (and the same shape).
  • name: optional name of the operation.

Returns: output: the output SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_placeholder(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_placeholder, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_placeholder

Return

Applicative

Original documentation for Builder.sparse_placeholder

def sparse_placeholder(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_placeholder to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_placeholder

def sparse_placeholder(dtype, shape=None, name=None)

Inserts a placeholder for a sparse tensor that will always be fed.

Important: This sparse tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run().

For example:

```python
x = tf.sparse_placeholder(tf.float32)
y = tf.sparse_reduce_sum(x)

with tf.Session() as sess:
    print(sess.run(y))  # ERROR: will fail because x was not fed.

    indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
    values = np.array([1.0, 2.0], dtype=np.float32)
    shape = np.array([7, 9, 2], dtype=np.int64)
    print(sess.run(y, feed_dict={
        x: tf.SparseTensorValue(indices, values, shape)}))  # Will succeed.
    print(sess.run(y, feed_dict={
        x: (indices, values, shape)}))  # Will succeed.

    sp = tf.SparseTensor(indices=indices, values=values, shape=shape)
    sp_value = sp.eval(session=sess)
    print(sess.run(y, feed_dict={x: sp_value}))  # Will succeed.
```

Args: dtype: The type of values elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a sparse tensor of any shape. name: A name for prefixing the operations (optional).

Returns: A SparseTensor that may be used as a handle for feeding a value, but not evaluated directly.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_placeholder_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_placeholder_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_placeholder_layer

Return

Applicative

Original documentation for Builder.sparse_placeholder_layer

def sparse_placeholder_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_placeholder, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_placeholder

def sparse_placeholder(dtype, shape=None, name=None):

Inserts a placeholder for a sparse tensor that will be always fed.

Important: This sparse tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run().

For example:

```python
x = tf.sparse_placeholder(tf.float32)
y = tf.sparse_reduce_sum(x)

with tf.Session() as sess:
    print(sess.run(y))  # ERROR: will fail because x was not fed.

    indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
    values = np.array([1.0, 2.0], dtype=np.float32)
    shape = np.array([7, 9, 2], dtype=np.int64)
    print(sess.run(y, feed_dict={
        x: tf.SparseTensorValue(indices, values, shape)}))  # Will succeed.
    print(sess.run(y, feed_dict={
        x: (indices, values, shape)}))  # Will succeed.

    sp = tf.SparseTensor(indices=indices, values=values, shape=shape)
    sp_value = sp.eval(session=sess)
    print(sess.run(y, feed_dict={x: sp_value}))  # Will succeed.
```

Args: dtype: The type of values elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a sparse tensor of any shape. name: A name for prefixing the operations (optional).

Returns: A SparseTensor that may be used as a handle for feeding a value, but not evaluated directly.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_reduce_sum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reduce_sum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reduce_sum

Return

Applicative

Original documentation for Builder.sparse_reduce_sum

def sparse_reduce_sum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_reduce_sum to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_reduce_sum

def sparse_reduce_sum(sp_input, reduction_axes=None, keep_dims=False)

Computes the sum of elements across dimensions of a SparseTensor.

This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In particular, this Op also returns a dense Tensor instead of a sparse one.

Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, similar to the indexing rules in Python.

For example:

```python
# 'x' represents [[1, ?, 1]
#                 [?, 1, ?]]
# where ? is implicitly-zero.
tf.sparse_reduce_sum(x) ==> 3
tf.sparse_reduce_sum(x, 0) ==> [1, 1, 1]
tf.sparse_reduce_sum(x, 1) ==> [2, 1]  # Can also use -1 as the axis.
tf.sparse_reduce_sum(x, 1, keep_dims=True) ==> [[2], [1]]
tf.sparse_reduce_sum(x, [0, 1]) ==> 3
```

Args: sp_input: The SparseTensor to reduce. Should have numeric type. reduction_axes: The dimensions to reduce; list or scalar. If None (the default), reduces all dimensions. keep_dims: If true, retain reduced dimensions with length 1.

Returns: The reduced Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
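
A runnable version of the snippet above, as a sketch against the TF 0.x-era graph API these docs quote (later TensorFlow versions renamed SparseTensor's shape argument to dense_shape and keep_dims to keepdims):

```python
import tensorflow as tf

# 'x' represents [[1, ?, 1], [?, 1, ?]], where ? is implicitly zero.
x = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
                    values=[1, 1, 1],
                    shape=[2, 3])

with tf.Session() as sess:
    print(sess.run(tf.sparse_reduce_sum(x)))     # 3
    print(sess.run(tf.sparse_reduce_sum(x, 0)))  # [1 1 1]
    print(sess.run(tf.sparse_reduce_sum(x, 1)))  # [2 1]
```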

def sparse_reduce_sum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reduce_sum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reduce_sum_layer

Return

Applicative

Original documentation for Builder.sparse_reduce_sum_layer

def sparse_reduce_sum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_reduce_sum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_reduce_sum

def sparse_reduce_sum(sp_input, reduction_axes=None, keep_dims=False):

Computes the sum of elements across dimensions of a SparseTensor.

This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In particular, this Op also returns a dense Tensor instead of a sparse one.

Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, similar to the indexing rules in Python.

For example:

```python
# 'x' represents [[1, ?, 1]
#                 [?, 1, ?]]
# where ? is implicitly-zero.
tf.sparse_reduce_sum(x) ==> 3
tf.sparse_reduce_sum(x, 0) ==> [1, 1, 1]
tf.sparse_reduce_sum(x, 1) ==> [2, 1]  # Can also use -1 as the axis.
tf.sparse_reduce_sum(x, 1, keep_dims=True) ==> [[2], [1]]
tf.sparse_reduce_sum(x, [0, 1]) ==> 3
```

Args: sp_input: The SparseTensor to reduce. Should have numeric type. reduction_axes: The dimensions to reduce; list or scalar. If None (the default), reduces all dimensions. keep_dims: If true, retain reduced dimensions with length 1.

Returns: The reduced Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_reduce_sum_sparse(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reduce_sum_sparse, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reduce_sum_sparse

Return

Applicative

Original documentation for Builder.sparse_reduce_sum_sparse

def sparse_reduce_sum_sparse(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_reduce_sum_sparse to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_reduce_sum_sparse

def sparse_reduce_sum_sparse(sp_input, reduction_axes=None, keep_dims=False)

Computes the sum of elements across dimensions of a SparseTensor.

This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a SparseTensor.

Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.

Args: sp_input: The SparseTensor to reduce. Should have numeric type. reduction_axes: The dimensions to reduce; list or scalar. If None (the default), reduces all dimensions. keep_dims: If true, retain reduced dimensions with length 1.

Returns: The reduced SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
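
For contrast with sparse_reduce_sum above, a short sketch (same TF 0.x-era API assumptions) showing that this variant keeps the result sparse:

```python
import tensorflow as tf

x = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
                    values=[1, 1, 1],
                    shape=[2, 3])

# Row sums, returned as a SparseTensor rather than a dense Tensor.
reduced = tf.sparse_reduce_sum_sparse(x, reduction_axes=1)

with tf.Session() as sess:
    res = sess.run(reduced)  # a SparseTensorValue
    print(res.indices)  # [[0] [1]]
    print(res.values)   # [2 1]
```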

def sparse_reduce_sum_sparse_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reduce_sum_sparse_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reduce_sum_sparse_layer

Return

Applicative

Original documentation for Builder.sparse_reduce_sum_sparse_layer

def sparse_reduce_sum_sparse_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_reduce_sum_sparse, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_reduce_sum_sparse

def sparse_reduce_sum_sparse(sp_input, reduction_axes=None, keep_dims=False):

Computes the sum of elements across dimensions of a SparseTensor.

This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a SparseTensor.

Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.

Args: sp_input: The SparseTensor to reduce. Should have numeric type. reduction_axes: The dimensions to reduce; list or scalar. If None (the default), reduces all dimensions. keep_dims: If true, retain reduced dimensions with length 1.

Returns: The reduced SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_reorder(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reorder, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reorder

Return

Applicative

Original documentation for Builder.sparse_reorder

def sparse_reorder(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_reorder to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_reorder

def sparse_reorder(sp_input, name=None)

Reorders a SparseTensor into the canonical, row-major ordering.

Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values to add entries.

Reordering does not affect the shape of the SparseTensor.

For example, if sp_input has shape [4, 5] and indices / values:

[0, 3]: b
[0, 1]: a
[3, 1]: d
[2, 0]: c

then the output will be a SparseTensor of shape [4, 5] and indices / values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

Args: sp_input: The input SparseTensor. name: A name prefix for the returned tensors (optional)

Returns: A SparseTensor with the same shape and non-empty values, but in canonical ordering.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
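
A runnable sketch of the example above (TF 0.x-era API assumed):

```python
import tensorflow as tf

# Indices deliberately out of the canonical row-major order.
sp = tf.SparseTensor(indices=[[0, 3], [0, 1], [3, 1], [2, 0]],
                     values=["b", "a", "d", "c"],
                     shape=[4, 5])

with tf.Session() as sess:
    res = sess.run(tf.sparse_reorder(sp))
    print(res.indices)  # [[0 1] [0 3] [2 0] [3 1]]
    print(res.values)   # ['a' 'b' 'c' 'd']
```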

def sparse_reorder_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reorder_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reorder_layer

Return

Applicative

Original documentation for Builder.sparse_reorder_layer

def sparse_reorder_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_reorder, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_reorder

def sparse_reorder(sp_input, name=None):

Reorders a SparseTensor into the canonical, row-major ordering.

Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values to add entries.

Reordering does not affect the shape of the SparseTensor.

For example, if sp_input has shape [4, 5] and indices / values:

[0, 3]: b
[0, 1]: a
[3, 1]: d
[2, 0]: c

then the output will be a SparseTensor of shape [4, 5] and indices / values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

Args: sp_input: The input SparseTensor. name: A name prefix for the returned tensors (optional)

Returns: A SparseTensor with the same shape and non-empty values, but in canonical ordering.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_reset_shape(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reset_shape, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reset_shape

Return

Applicative

Original documentation for Builder.sparse_reset_shape

def sparse_reset_shape(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_reset_shape to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_reset_shape

def sparse_reset_shape(sp_input, new_shape=None)

Resets the shape of a SparseTensor with indices and values unchanged.

If new_shape is None, returns a copy of sp_input with its shape reset to the tight bounding box of sp_input.

If new_shape is provided, then it must be larger or equal in all dimensions compared to the shape of sp_input. When this condition is met, the returned SparseTensor will have its shape reset to new_shape and its indices and values unchanged from that of sp_input.

For example:

Consider a sp_input with shape [2, 3, 5]:

[0, 0, 1]: a
[0, 1, 0]: b
[0, 2, 2]: c
[1, 0, 3]: d
  • It is an error to set new_shape as [3, 7] since this represents a rank-2 tensor while sp_input is rank-3. This is either a ValueError during graph construction (if both shapes are known) or an OpError during run time.

  • Setting new_shape as [2, 3, 6] will be fine as this shape is larger or equal in every dimension compared to the original shape [2, 3, 5].

  • On the other hand, setting new_shape as [2, 3, 4] is also an error: The third dimension is smaller than the original shape [2, 3, 5] (and an InvalidArgumentError will be raised).

  • If new_shape is None, the returned SparseTensor will have a shape [2, 3, 4], which is the tight bounding box of sp_input.

Args: sp_input: The input SparseTensor. new_shape: None or a vector representing the new shape for the returned SparseTensor.

Returns: A SparseTensor with indices and values unchanged from input_sp. Its shape is new_shape if that is set; otherwise it is the tight bounding box of input_sp.

Raises: TypeError: If sp_input is not a SparseTensor. ValueError: If new_shape represents a tensor with a different rank from that of sp_input (if shapes are known when graph is constructed). OpError: - If new_shape has dimension sizes that are too small. - If shapes are not known during graph construction time, and during run time it is found out that the ranks do not match.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
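
A runnable sketch of the two cases described above (TF 0.x-era API assumed; in that era the fetched SparseTensorValue exposed the dense shape as .shape):

```python
import tensorflow as tf

sp = tf.SparseTensor(indices=[[0, 0, 1], [0, 1, 0], [0, 2, 2], [1, 0, 3]],
                     values=[1.0, 2.0, 3.0, 4.0],
                     shape=[2, 3, 5])

tight = tf.sparse_reset_shape(sp)                       # tight bounding box
grown = tf.sparse_reset_shape(sp, new_shape=[2, 3, 6])  # explicitly enlarged

with tf.Session() as sess:
    print(sess.run(tight).shape)  # [2 3 4]
    print(sess.run(grown).shape)  # [2 3 6]
```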

def sparse_reset_shape_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reset_shape_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reset_shape_layer

Return

Applicative

Original documentation for Builder.sparse_reset_shape_layer

def sparse_reset_shape_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_reset_shape, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_reset_shape

def sparse_reset_shape(sp_input, new_shape=None):

Resets the shape of a SparseTensor with indices and values unchanged.

If new_shape is None, returns a copy of sp_input with its shape reset to the tight bounding box of sp_input.

If new_shape is provided, then it must be larger or equal in all dimensions compared to the shape of sp_input. When this condition is met, the returned SparseTensor will have its shape reset to new_shape and its indices and values unchanged from that of sp_input.

For example:

Consider a sp_input with shape [2, 3, 5]:

[0, 0, 1]: a
[0, 1, 0]: b
[0, 2, 2]: c
[1, 0, 3]: d
  • It is an error to set new_shape as [3, 7] since this represents a rank-2 tensor while sp_input is rank-3. This is either a ValueError during graph construction (if both shapes are known) or an OpError during run time.

  • Setting new_shape as [2, 3, 6] will be fine as this shape is larger or equal in every dimension compared to the original shape [2, 3, 5].

  • On the other hand, setting new_shape as [2, 3, 4] is also an error: The third dimension is smaller than the original shape [2, 3, 5] (and an InvalidArgumentError will be raised).

  • If new_shape is None, the returned SparseTensor will have a shape [2, 3, 4], which is the tight bounding box of sp_input.

Args: sp_input: The input SparseTensor. new_shape: None or a vector representing the new shape for the returned SparseTensor.

Returns: A SparseTensor with indices and values unchanged from input_sp. Its shape is new_shape if that is set; otherwise it is the tight bounding box of input_sp.

Raises: TypeError: If sp_input is not a SparseTensor. ValueError: If new_shape represents a tensor with a different rank from that of sp_input (if shapes are known when graph is constructed). OpError: - If new_shape has dimension sizes that are too small. - If shapes are not known during graph construction time, and during run time it is found out that the ranks do not match.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_reshape(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reshape, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reshape

Return

Applicative

Original documentation for Builder.sparse_reshape

def sparse_reshape(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_reshape to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_reshape

def sparse_reshape(sp_input, shape, name=None)

Reshapes a SparseTensor to represent values in a new dense shape.

This operation has the same semantics as reshape on the represented dense tensor. The indices of non-empty values in sp_input are recomputed based on the new dense shape, and a new SparseTensor is returned containing the new indices and new shape. The order of non-empty values in sp_input is unchanged.

If one component of shape is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of shape can be -1. The number of dense elements implied by shape must be the same as the number of dense elements originally represented by sp_input.

For example, if sp_input has shape [2, 3, 6] and indices / values:

[0, 0, 0]: a
[0, 0, 1]: b
[0, 1, 0]: c
[1, 0, 0]: d
[1, 2, 3]: e

and shape is [9, -1], then the output will be a SparseTensor of shape [9, 4] and indices / values:

[0, 0]: a
[0, 1]: b
[1, 2]: c
[4, 2]: d
[8, 1]: e

Args: sp_input: The input SparseTensor. shape: A 1-D (vector) int64 Tensor specifying the new dense shape of the represented SparseTensor. name: A name prefix for the returned tensors (optional)

Returns: A SparseTensor with the same non-empty values but with indices calculated by the new dense shape.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
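
The example above, as a runnable sketch (TF 0.x-era API assumed):

```python
import tensorflow as tf

sp = tf.SparseTensor(
    indices=[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 2, 3]],
    values=["a", "b", "c", "d", "e"],
    shape=[2, 3, 6])

# The -1 dimension is inferred as 4, since 2 * 3 * 6 == 9 * 4.
reshaped = tf.sparse_reshape(sp, [9, -1])

with tf.Session() as sess:
    print(sess.run(reshaped).indices)  # [[0 0] [0 1] [1 2] [4 2] [8 1]]
```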

def sparse_reshape_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_reshape_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_reshape_layer

Return

Applicative

Original documentation for Builder.sparse_reshape_layer

def sparse_reshape_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_reshape, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_reshape

def sparse_reshape(sp_input, shape, name=None):

Reshapes a SparseTensor to represent values in a new dense shape.

This operation has the same semantics as reshape on the represented dense tensor. The indices of non-empty values in sp_input are recomputed based on the new dense shape, and a new SparseTensor is returned containing the new indices and new shape. The order of non-empty values in sp_input is unchanged.

If one component of shape is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of shape can be -1. The number of dense elements implied by shape must be the same as the number of dense elements originally represented by sp_input.

For example, if sp_input has shape [2, 3, 6] and indices / values:

[0, 0, 0]: a
[0, 0, 1]: b
[0, 1, 0]: c
[1, 0, 0]: d
[1, 2, 3]: e

and shape is [9, -1], then the output will be a SparseTensor of shape [9, 4] and indices / values:

[0, 0]: a
[0, 1]: b
[1, 2]: c
[4, 2]: d
[8, 1]: e

Args: sp_input: The input SparseTensor. shape: A 1-D (vector) int64 Tensor specifying the new dense shape of the represented SparseTensor. name: A name prefix for the returned tensors (optional)

Returns: A SparseTensor with the same non-empty values but with indices calculated by the new dense shape.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_retain(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_retain, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_retain

Return

Applicative

Original documentation for Builder.sparse_retain

def sparse_retain(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_retain to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_retain

def sparse_retain(sp_input, to_retain)

Retains specified non-empty values within a SparseTensor.

For example, if sp_input has shape [4, 5] and 4 non-empty string values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

and to_retain = [True, False, False, True], then the output will be a SparseTensor of shape [4, 5] with 2 non-empty values:

[0, 1]: a
[3, 1]: d

Args: sp_input: The input SparseTensor with N non-empty elements. to_retain: A bool vector of length N with M true values.

Returns: A SparseTensor with the same shape as the input and M non-empty elements corresponding to the true positions in to_retain.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
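
The example above as a runnable sketch (TF 0.x-era API assumed):

```python
import tensorflow as tf

sp = tf.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
                     values=["a", "b", "c", "d"],
                     shape=[4, 5])

# Keep the 1st and 4th non-empty values, drop the rest.
kept = tf.sparse_retain(sp, [True, False, False, True])

with tf.Session() as sess:
    res = sess.run(kept)
    print(res.indices)  # [[0 1] [3 1]]
    print(res.values)   # ['a' 'd']
```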

def sparse_retain_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_retain_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_retain_layer

Return

Applicative

Original documentation for Builder.sparse_retain_layer

def sparse_retain_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_retain, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_retain

def sparse_retain(sp_input, to_retain):

Retains specified non-empty values within a SparseTensor.

For example, if sp_input has shape [4, 5] and 4 non-empty string values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

and to_retain = [True, False, False, True], then the output will be a SparseTensor of shape [4, 5] with 2 non-empty values:

[0, 1]: a
[3, 1]: d

Args: sp_input: The input SparseTensor with N non-empty elements. to_retain: A bool vector of length N with M true values.

Returns: A SparseTensor with the same shape as the input and M non-empty elements corresponding to the true positions in to_retain.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_segment_mean(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_segment_mean, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_segment_mean

Return

Applicative

Original documentation for Builder.sparse_segment_mean

def sparse_segment_mean(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_segment_mean to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_segment_mean

def sparse_segment_mean(data, indices, segment_ids, name=None)

Computes the mean along sparse segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Like SegmentMean, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.

Args: data: A Tensor. Must be one of the following types: float32, float64. indices: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor. Has same rank as segment_ids. segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
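
Since the quoted docs give no example here, a small hedged sketch (TF 0.x-era API assumed): indices picks rows of data, and segment_ids assigns each picked row to an output segment.

```python
import tensorflow as tf

data = tf.constant([[1.0, 2.0, 3.0, 4.0],
                    [-1.0, -2.0, -3.0, -4.0],
                    [5.0, 6.0, 7.0, 8.0]])

# Average rows 0 and 2 into a single output segment.
out = tf.sparse_segment_mean(data, tf.constant([0, 2]), tf.constant([0, 0]))

with tf.Session() as sess:
    print(sess.run(out))  # [[3. 4. 5. 6.]]
```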

def sparse_segment_mean_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_segment_mean_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_segment_mean_layer

Return

Applicative

Original documentation for Builder.sparse_segment_mean_layer

def sparse_segment_mean_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_segment_mean, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_segment_mean

def sparse_segment_mean(data, indices, segment_ids, name=None):

Computes the mean along sparse segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Like SegmentMean, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.

Args: data: A Tensor. Must be one of the following types: float32, float64. indices: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor. Has same rank as segment_ids. segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_segment_sqrt_n(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_segment_sqrt_n, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_segment_sqrt_n

Return

Applicative

Original documentation for Builder.sparse_segment_sqrt_n

def sparse_segment_sqrt_n(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_segment_sqrt_n to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_segment_sqrt_n

def sparse_segment_sqrt_n(data, indices, segment_ids, name=None)

Computes the sum along sparse segments of a tensor divided by the sqrt of N.

N is the size of the segment being reduced.

Read the section on Segmentation for an explanation of segments.

Args: data: A Tensor. Must be one of the following types: float32, float64. indices: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor. Has same rank as segment_ids. segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
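
Again no example in the quoted docs, so a small hedged sketch (TF 0.x-era API assumed); with two rows in the segment, the sum is divided by sqrt(2):

```python
import tensorflow as tf

data = tf.constant([[1.0, 2.0],
                    [3.0, 4.0]])

out = tf.sparse_segment_sqrt_n(data, tf.constant([0, 1]), tf.constant([0, 0]))

with tf.Session() as sess:
    # [[(1+3)/sqrt(2), (2+4)/sqrt(2)]] ~= [[2.83, 4.24]]
    print(sess.run(out))
```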

def sparse_segment_sqrt_n_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_segment_sqrt_n_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_segment_sqrt_n_layer

Return

Applicative

Original documentation for Builder.sparse_segment_sqrt_n_layer

def sparse_segment_sqrt_n_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_segment_sqrt_n, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_segment_sqrt_n

def sparse_segment_sqrt_n(data, indices, segment_ids, name=None):

Computes the sum along sparse segments of a tensor divided by the sqrt of N.

N is the size of the segment being reduced.

Read the section on Segmentation for an explanation of segments.

Args: data: A Tensor. Must be one of the following types: float32, float64. indices: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor. Has same rank as segment_ids. segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_segment_sum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_segment_sum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_segment_sum

Return

Applicative

Original documentation for Builder.sparse_segment_sum

def sparse_segment_sum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_segment_sum to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_segment_sum

def sparse_segment_sum(data, indices, segment_ids, name=None)

Computes the sum along sparse segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Like SegmentSum, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.

For example:

```prettyprint
c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])

# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
  ==> [[0 0 0 0]]

# Select two rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
  ==> [[ 1  2  3  4]
       [-1 -2 -3 -4]]

# Select all rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
  ==> [[0 0 0 0]
       [5 6 7 8]]

# Which is equivalent to:
tf.segment_sum(c, tf.constant([0, 0, 1]))
```

Args: data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. indices: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor. Has same rank as segment_ids. segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_segment_sum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_segment_sum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_segment_sum_layer

Return

Applicative

Original documentation for Builder.sparse_segment_sum_layer

def sparse_segment_sum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_segment_sum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_segment_sum

def sparse_segment_sum(data, indices, segment_ids, name=None):

Computes the sum along sparse segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Like SegmentSum, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.

For example:

```prettyprint
c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])

# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
  ==> [[0 0 0 0]]

# Select two rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
  ==> [[ 1  2  3  4]
       [-1 -2 -3 -4]]

# Select all rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
  ==> [[0 0 0 0]
       [5 6 7 8]]

# Which is equivalent to:
tf.segment_sum(c, tf.constant([0, 0, 1]))
```

Args: data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. indices: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor. Has same rank as segment_ids. segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for dimension 0 which has size k, the number of segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_softmax(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_softmax, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_softmax

Return

Applicative

Original documentation for Builder.sparse_softmax

def sparse_softmax(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_softmax to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_softmax

def sparse_softmax(sp_input, name=None)

Applies softmax to a batched N-D SparseTensor.

The inputs represent an N-D SparseTensor with logical shape [..., B, C] (where N >= 2), and with indices sorted in the canonical lexicographic order.

This op is equivalent to applying the normal tf.nn.softmax() to each innermost logical submatrix with shape [B, C], but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to:

(1) Applies tf.nn.softmax() to a densified view of each innermost submatrix with shape [B, C], along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements.

Hence, the SparseTensor result has exactly the same non-zero indices and shape.

Example:

```python
# First batch:
#   [?   e.]
#   [1.  ? ]
# Second batch:
#   [e   ? ]
#   [e   e ]
shape = [2, 2, 2]  # 3-D SparseTensor
values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
indices = np.vstack(np.where(values)).astype(np.int64).T

result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape))

# ...returning a 3-D SparseTensor, equivalent to:
#   [?   1.]       [1   ? ]
#   [1.  ? ]  and  [.5  .5]
# where ? means implicitly zero.
```

Args: sp_input: N-D SparseTensor, where N >= 2. name: optional name of the operation. Returns: output: N-D SparseTensor representing the results.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_softmax_cross_entropy_with_logits(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_softmax_cross_entropy_with_logits, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_softmax_cross_entropy_with_logits

Return

Applicative

Original documentation for Builder.sparse_softmax_cross_entropy_with_logits

def sparse_softmax_cross_entropy_with_logits(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.sparse_softmax_cross_entropy_with_logits to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.sparse_softmax_cross_entropy_with_logits

def sparse_softmax_cross_entropy_with_logits(logits, labels, name=None)

Computes sparse softmax cross entropy between logits and labels.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

NOTE: For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see softmax_cross_entropy_with_logits.

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

A common use case is to have logits of shape [batch_size, num_classes] and labels of shape [batch_size]. But higher dimensions are supported.

Args:

logits: Unscaled log probabilities of rank r and shape [d_0, d_1, ..., d_{r-2}, num_classes] and dtype float32 or float64. labels: Tensor of shape [d_0, d_1, ..., d_{r-2}] and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for the corresponding loss and gradient rows on GPU. name: A name for the operation (optional).

Returns: A Tensor of the same shape as labels and of the same type as logits with the softmax cross entropy loss.

Raises: ValueError: If logits are scalars (need to have rank >= 1) or if the rank of the labels is not equal to the rank of the logits minus one.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
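
A minimal hedged sketch of the common [batch_size, num_classes] case, written against the positional (logits, labels) signature quoted above (later TensorFlow versions require keyword arguments):

```python
import tensorflow as tf

# Unscaled logits for 2 examples over 3 classes, plus the integer index
# of the true class for each example.
logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.0, 0.0, 3.0]])
labels = tf.constant([0, 2])

loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels)

with tf.Session() as sess:
    print(sess.run(loss))  # one cross-entropy value per example
```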

def sparse_softmax_cross_entropy_with_logits_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_softmax_cross_entropy_with_logits_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_softmax_cross_entropy_with_logits_layer

Return

Applicative

Original documentation for Builder.sparse_softmax_cross_entropy_with_logits_layer

def sparse_softmax_cross_entropy_with_logits_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.sparse_softmax_cross_entropy_with_logits, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.sparse_softmax_cross_entropy_with_logits

def sparse_softmax_cross_entropy_with_logits(logits, labels, name=None):

Computes sparse softmax cross entropy between logits and labels.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

NOTE: For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see softmax_cross_entropy_with_logits.

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

A common use case is to have logits of shape [batch_size, num_classes] and labels of shape [batch_size]. But higher dimensions are supported.

Args:

logits: Unscaled log probabilities of rank r and shape [d_0, d_1, ..., d_{r-2}, num_classes] and dtype float32 or float64. labels: Tensor of shape [d_0, d_1, ..., d_{r-2}] and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for the corresponding loss and gradient rows on GPU. name: A name for the operation (optional).

Returns: A Tensor of the same shape as labels and of the same type as logits with the softmax cross entropy loss.

Raises: ValueError: If logits are scalars (need to have rank >= 1) or if the rank of the labels is not equal to the rank of the logits minus one.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_softmax_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_softmax_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_softmax_layer

Return

Applicative

Original documentation for Builder.sparse_softmax_layer

def sparse_softmax_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_softmax, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_softmax

def sparse_softmax(sp_input, name=None):

Applies softmax to a batched N-D SparseTensor.

The inputs represent an N-D SparseTensor with logical shape [..., B, C] (where N >= 2), and with indices sorted in the canonical lexicographic order.

This op is equivalent to applying the normal tf.nn.softmax() to each innermost logical submatrix with shape [B, C], but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to:

(1) Applies tf.nn.softmax() to a densified view of each innermost submatrix with shape [B, C], along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements.

Hence, the SparseTensor result has exactly the same non-zero indices and shape.

Example:

```python
# First batch:
#   [?   e.]
#   [1.  ? ]
# Second batch:
#   [e   ? ]
#   [e   e ]
shape = [2, 2, 2]  # 3-D SparseTensor
values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
indices = np.vstack(np.where(values)).astype(np.int64).T

result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape))

# ...returning a 3-D SparseTensor, equivalent to:
#   [?   1.]       [1   ? ]
#   [1.  ? ]  and  [.5  .5]
# where ? means implicitly zero.
```

Args: sp_input: N-D SparseTensor, where N >= 2. name: optional name of the operation. Returns: output: N-D SparseTensor representing the results.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_split(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_split, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_split

Return

Applicative

Original documentation for Builder.sparse_split

def sparse_split(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_split to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_split

def sparse_split(split_dim, num_split, sp_input, name=None)

Split a SparseTensor into num_split tensors along split_dim.

If sp_input.shape[split_dim] is not an integer multiple of num_split, each of the slices 0 through shape[split_dim] % num_split - 1 gets one extra element along split_dim. For example, if split_dim = 1 and num_split = 2 and the input is:

input_tensor = shape = [2, 7]
[    a   d e  ]
[b c          ]

Graphically the output tensors are:

output_tensor[0] =
[    a ]
[b c   ]

output_tensor[1] =
[ d e  ]
[      ]

Args: split_dim: A 0-D int32 Tensor. The dimension along which to split. num_split: A Python integer. The number of ways to split. sp_input: The SparseTensor to split. name: A name for the operation (optional).

Returns: num_split SparseTensor objects resulting from splitting value.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
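
The example above as a runnable sketch (TF 0.x-era API assumed; the letter positions are my own reading of the diagram):

```python
import tensorflow as tf

# 'a' at [0, 3], 'd' at [0, 5], 'e' at [0, 6], 'b' at [1, 0], 'c' at [1, 1].
sp = tf.SparseTensor(indices=[[0, 3], [0, 5], [0, 6], [1, 0], [1, 1]],
                     values=["a", "d", "e", "b", "c"],
                     shape=[2, 7])

# Split the 7 columns into 2 slices: 7 = 4 + 3, the first slice gets
# the extra column.
left, right = tf.sparse_split(1, 2, sp)

with tf.Session() as sess:
    print(sess.run(left).shape)   # [2 4], contains a, b, c
    print(sess.run(right).shape)  # [2 3], contains d, e
```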

def sparse_split_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_split_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_split_layer

Return

Applicative

Original documentation for Builder.sparse_split_layer

def sparse_split_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_split, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_split

def sparse_split(split_dim, num_split, sp_input, name=None):

Split a SparseTensor into num_split tensors along split_dim.

If sp_input.shape[split_dim] is not an integer multiple of num_split, each of the slices 0 through shape[split_dim] % num_split - 1 gets one extra element along split_dim. For example, if split_dim = 1 and num_split = 2 and the input is:

input_tensor = shape = [2, 7]
[    a   d e  ]
[b c          ]

Graphically the output tensors are:

output_tensor[0] =
[    a ]
[b c   ]

output_tensor[1] =
[ d e  ]
[      ]

Args: split_dim: A 0-D int32 Tensor. The dimension along which to split. num_split: A Python integer. The number of ways to split. sp_input: The SparseTensor to split. name: A name for the operation (optional).

Returns: num_split SparseTensor objects resulting from splitting value.

Raises: TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_tensor_dense_matmul(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_tensor_dense_matmul, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_tensor_dense_matmul

Return

Applicative

Original documentation for Builder.sparse_tensor_dense_matmul

def sparse_tensor_dense_matmul(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_tensor_dense_matmul to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_tensor_dense_matmul

def sparse_tensor_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None)

Multiply SparseTensor (of rank 2) "A" by dense matrix "B".

No validity checking is performed on the indices of A. However, the following input format is recommended for optimal behavior:

if adjoint_a == false: A should be sorted in lexicographically increasing order. Use sparse_reorder if you're not sure.
if adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).

Deciding when to use sparse_tensor_dense_matmul vs. matmul(sp_a=True):

There are a number of questions to ask in the decision process, including:

  • Will the SparseTensor A fit in memory if densified?
  • Is the column count of the product large (>> 1)?
  • Is the density of A larger than approximately 15%?

If the answer to several of these questions is yes, consider converting the SparseTensor to a dense one and using tf.matmul with sp_a=True.

This operation tends to perform well when A is more sparse, when the column size of the product is small (e.g. matrix-vector multiplication), or when sp_a.shape takes on large values.

Below is a rough speed comparison between sparse_tensor_dense_matmul, labelled 'sparse', and matmul(sp_a=True), labelled 'dense'. For purposes of the comparison, the time spent converting from a SparseTensor to a dense Tensor is not included, so it is overly conservative with respect to the time ratio.

Benchmark system:
CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB
GPU: NVidia Tesla k40c

Compiled with: -c opt --config=cuda --copt=-mavx

```
tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks
A sparse [m, k] with % nonzero values between 1% and 80%
B dense [k, n]

% nnz  n   gpu    m     k     dt(dense)    dt(sparse)   dt(sparse)/dt(dense)
0.01   1   True   100   100   0.000221166  0.00010154   0.459112
0.01   1   True   100   1000  0.00033858   0.000109275  0.322745
0.01   1   True   1000  100   0.000310557  9.85661e-05  0.317385
0.01   1   True   1000  1000  0.0008721    0.000100875  0.115669
0.01   1   False  100   100   0.000208085  0.000107603  0.51711
0.01   1   False  100   1000  0.000327112  9.51118e-05  0.290762
0.01   1   False  1000  100   0.000308222  0.00010345   0.335635
0.01   1   False  1000  1000  0.000865721  0.000101397  0.117124
0.01   10  True   100   100   0.000218522  0.000105537  0.482958
0.01   10  True   100   1000  0.000340882  0.000111641  0.327506
0.01   10  True   1000  100   0.000315472  0.000117376  0.372064
0.01   10  True   1000  1000  0.000905493  0.000123263  0.136128
0.01   10  False  100   100   0.000221529  9.82571e-05  0.44354
0.01   10  False  100   1000  0.000330552  0.000112615  0.340687
0.01   10  False  1000  100   0.000341277  0.000114097  0.334324
0.01   10  False  1000  1000  0.000819944  0.000120982  0.147549
0.01   25  True   100   100   0.000207806  0.000105977  0.509981
0.01   25  True   100   1000  0.000322879  0.00012921   0.400181
0.01   25  True   1000  100   0.00038262   0.000141583  0.370035
0.01   25  True   1000  1000  0.000865438  0.000202083  0.233504
0.01   25  False  100   100   0.000209401  0.000104696  0.499979
0.01   25  False  100   1000  0.000321161  0.000130737  0.407076
0.01   25  False  1000  100   0.000377012  0.000136801  0.362856
0.01   25  False  1000  1000  0.000861125  0.00020272   0.235413
0.2    1   True   100   100   0.000206952  9.69219e-05  0.46833
0.2    1   True   100   1000  0.000348674  0.000147475  0.422959
0.2    1   True   1000  100   0.000336908  0.00010122   0.300439
0.2    1   True   1000  1000  0.001022     0.000203274  0.198898
0.2    1   False  100   100   0.000207532  9.5412e-05   0.459746
0.2    1   False  100   1000  0.000356127  0.000146824  0.41228
0.2    1   False  1000  100   0.000322664  0.000100918  0.312764
0.2    1   False  1000  1000  0.000998987  0.000203442  0.203648
0.2    10  True   100   100   0.000211692  0.000109903  0.519165
0.2    10  True   100   1000  0.000372819  0.000164321  0.440753
0.2    10  True   1000  100   0.000338651  0.000144806  0.427596
0.2    10  True   1000  1000  0.00108312   0.000758876  0.70064
0.2    10  False  100   100   0.000215727  0.000110502  0.512231
0.2    10  False  100   1000  0.000375419  0.0001613    0.429653
0.2    10  False  1000  100   0.000336999  0.000145628  0.432132
0.2    10  False  1000  1000  0.00110502   0.000762043  0.689618
0.2    25  True   100   100   0.000218705  0.000129913  0.594009
0.2    25  True   100   1000  0.000394794  0.00029428   0.745402
0.2    25  True   1000  100   0.000404483  0.0002693    0.665788
0.2    25  True   1000  1000  0.0012002    0.00194494   1.62052
0.2    25  False  100   100   0.000221494  0.0001306    0.589632
0.2    25  False  100   1000  0.000396436  0.000297204  0.74969
0.2    25  False  1000  100   0.000409346  0.000270068  0.659754
0.2    25  False  1000  1000  0.00121051   0.00193737   1.60046
0.5    1   True   100   100   0.000214981  9.82111e-05  0.456836
0.5    1   True   100   1000  0.000415328  0.000223073  0.537101
0.5    1   True   1000  100   0.000358324  0.00011269   0.314492
0.5    1   True   1000  1000  0.00137612   0.000437401  0.317851
0.5    1   False  100   100   0.000224196  0.000101423  0.452386
0.5    1   False  100   1000  0.000400987  0.000223286  0.556841
0.5    1   False  1000  100   0.000368825  0.00011224   0.304318
0.5    1   False  1000  1000  0.00136036   0.000429369  0.31563
0.5    10  True   100   100   0.000222125  0.000112308  0.505608
0.5    10  True   100   1000  0.000461088  0.00032357   0.701753
0.5    10  True   1000  100   0.000394624  0.000225497  0.571422
0.5    10  True   1000  1000  0.00158027   0.00190898   1.20801
0.5    10  False  100   100   0.000232083  0.000114978  0.495418
0.5    10  False  100   1000  0.000454574  0.000324632  0.714146
0.5    10  False  1000  100   0.000379097  0.000227768  0.600817
0.5    10  False  1000  1000  0.00160292   0.00190168   1.18638
0.5    25  True   100   100   0.00023429   0.000151703  0.647501
0.5    25  True   100   1000  0.000497462  0.000598873  1.20386
0.5    25  True   1000  100   0.000460778  0.000557038  1.20891
0.5    25  True   1000  1000  0.00170036   0.00467336   2.74845
0.5    25  False  100   100   0.000228981  0.000155334  0.678371
0.5    25  False  100   1000  0.000496139  0.000620789  1.25124
0.5    25  False  1000  100   0.00045473   0.000551528  1.21287
0.5    25  False  1000  1000  0.00171793   0.00467152   2.71927
0.8    1   True   100   100   0.000222037  0.000105301  0.47425
0.8    1   True   100   1000  0.000410804  0.000329327  0.801664
0.8    1   True   1000  100   0.000349735  0.000131225  0.375212
0.8    1   True   1000  1000  0.00139219   0.000677065  0.48633
0.8    1   False  100   100   0.000214079  0.000107486  0.502085
0.8    1   False  100   1000  0.000413746  0.000323244  0.781261
0.8    1   False  1000  100   0.000348983  0.000131983  0.378193
0.8    1   False  1000  1000  0.00136296   0.000685325  0.50282
0.8    10  True   100   100   0.000229159  0.00011825   0.516017
0.8    10  True   100   1000  0.000498845  0.000532618  1.0677
0.8    10  True   1000  100   0.000383126  0.00029935   0.781336
0.8    10  True   1000  1000  0.00162866   0.00307312   1.88689
0.8    10  False  100   100   0.000230783  0.000124958  0.541452
0.8    10  False  100   1000  0.000493393  0.000550654  1.11606
0.8    10  False  1000  100   0.000377167  0.000298581  0.791642
0.8    10  False  1000  1000  0.00165795   0.00305103   1.84024
0.8    25  True   100   100   0.000233496  0.000175241  0.75051
0.8    25  True   100   1000  0.00055654   0.00102658   1.84458
0.8    25  True   1000  100   0.000463814  0.000783267  1.68875
0.8    25  True   1000  1000  0.00186905   0.00755344   4.04132
0.8    25  False  100   100   0.000240243  0.000175047  0.728625
0.8    25  False  100   1000  0.000578102  0.00104499   1.80763
0.8    25  False  1000  100   0.000485113  0.000776849  1.60138
0.8    25  False  1000  1000  0.00211448   0.00752736   3.55992
```

Args:
  sp_a: SparseTensor A, of rank 2.
  b: A dense Matrix with the same dtype as sp_a.
  adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).
  adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).
  name: A name prefix for the returned tensors (optional)

Returns:
  A dense matrix (pseudo-code in dense np.matrix notation):
    A = A.H if adjoint_a else A
    B = B.H if adjoint_b else B
    return A*B
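The pseudo-code above is easy to check outside of TensorFlow. Below is a minimal NumPy sketch of the same adjoint handling (reference_matmul is a hypothetical helper written for this note, not part of tensorbuilder or TensorFlow):

```python
import numpy as np

def reference_matmul(a, b, adjoint_a=False, adjoint_b=False):
    # Mirror the documented pseudo-code: A = A.H if adjoint_a else A, etc.
    # .conj().T is the dense-ndarray spelling of np.matrix's .H (conjugate transpose).
    if adjoint_a:
        a = a.conj().T
    if adjoint_b:
        b = b.conj().T
    return np.dot(a, b)

a = np.random.rand(3, 4)
b = np.random.rand(3, 5)
# With adjoint_a=True the effective shapes are (4, 3) x (3, 5) -> (4, 5).
print(reference_matmul(a, b, adjoint_a=True).shape)  # (4, 5)
```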

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
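All of the generated methods in this module share the shape of the _method body above: they close over the wrapped function f and dispatch on the builder by name. The following self-contained sketch uses toy stand-ins (ToyBuilder and ToyApplicative are illustrative names, not the library's real classes) to show how that compose-based lifting chains:

```python
class ToyBuilder(object):
    # Minimal stand-in: wraps a value and exposes one "lifted" method.
    def __init__(self, tensor):
        self.tensor = tensor
    def square(self):
        return ToyBuilder(self.tensor ** 2)

class ToyApplicative(object):
    # Wraps a function Builder -> Builder and composes further functions onto it.
    def __init__(self, f=lambda builder: builder):
        self.f = f
    def compose(self, g):
        return ToyApplicative(lambda builder: g(self.f(builder)))
    def __call__(self, builder):
        return self.f(builder)

def _make_method(name):
    # Same shape as the generated _method above: look the method up on the
    # builder by name and forward all arguments to it.
    def _method(app, *args, **kwargs):
        def _lambda(builder):
            g = getattr(builder, name)
            return g(*args, **kwargs)
        return app.compose(_lambda)
    return _method

ToyApplicative.square = _make_method("square")

pipeline = ToyApplicative().square().square()
print(pipeline(ToyBuilder(3)).tensor)  # (3 ** 2) ** 2 == 81
```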

def sparse_tensor_dense_matmul_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_tensor_dense_matmul_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_tensor_dense_matmul_layer

Return

Applicative

Original documentation for Builder.sparse_tensor_dense_matmul_layer

def sparse_tensor_dense_matmul_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_tensor_dense_matmul, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_tensor_dense_matmul

def sparse_tensor_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None):

Multiply SparseTensor (of rank 2) "A" by dense matrix "B".

No validity checking is performed on the indices of A. However, the following input format is recommended for optimal behavior:

if adjoint_a == false: A should be sorted in lexicographically increasing order. Use sparse_reorder if you're not sure.
if adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).

Deciding when to use sparse_tensor_dense_matmul vs. matmul(sp_a=True):

There are a number of questions to ask in the decision process, including:

  • Will the SparseTensor A fit in memory if densified?
  • Is the column count of the product large (>> 1)?
  • Is the density of A larger than approximately 15%?

If the answer to several of these questions is yes, consider converting the SparseTensor to a dense one and using tf.matmul with sp_a=True.

This operation tends to perform well when A is more sparse, if the column size of the product is small (e.g. matrix-vector multiplication), and if sp_a.shape takes on large values.

Below is a rough speed comparison between sparse_tensor_dense_matmul, labelled 'sparse', and matmul(sp_a=True), labelled 'dense'. For purposes of the comparison, the time spent converting from a SparseTensor to a dense Tensor is not included, so it is overly conservative with respect to the time ratio.

Benchmark system:
CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB
GPU: NVidia Tesla k40c

Compiled with: -c opt --config=cuda --copt=-mavx

```
tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks
A sparse [m, k] with % nonzero values between 1% and 80%
B dense [k, n]

% nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)
0.01 1 True 100 100 0.000221166 0.00010154 0.459112
0.01 1 True 100 1000 0.00033858 0.000109275 0.322745
0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385
0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669
0.01 1 False 100 100 0.000208085 0.000107603 0.51711
0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762
0.01 1 False 1000 100 0.000308222 0.00010345 0.335635
0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124
0.01 10 True 100 100 0.000218522 0.000105537 0.482958
0.01 10 True 100 1000 0.000340882 0.000111641 0.327506
0.01 10 True 1000 100 0.000315472 0.000117376 0.372064
0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128
0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354
0.01 10 False 100 1000 0.000330552 0.000112615 0.340687
0.01 10 False 1000 100 0.000341277 0.000114097 0.334324
0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549
0.01 25 True 100 100 0.000207806 0.000105977 0.509981
0.01 25 True 100 1000 0.000322879 0.00012921 0.400181
0.01 25 True 1000 100 0.00038262 0.000141583 0.370035
0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504
0.01 25 False 100 100 0.000209401 0.000104696 0.499979
0.01 25 False 100 1000 0.000321161 0.000130737 0.407076
0.01 25 False 1000 100 0.000377012 0.000136801 0.362856
0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413
0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833
0.2 1 True 100 1000 0.000348674 0.000147475 0.422959
0.2 1 True 1000 100 0.000336908 0.00010122 0.300439
0.2 1 True 1000 1000 0.001022 0.000203274 0.198898
0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746
0.2 1 False 100 1000 0.000356127 0.000146824 0.41228
0.2 1 False 1000 100 0.000322664 0.000100918 0.312764
0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648
0.2 10 True 100 100 0.000211692 0.000109903 0.519165
0.2 10 True 100 1000 0.000372819 0.000164321 0.440753
0.2 10 True 1000 100 0.000338651 0.000144806 0.427596
0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064
0.2 10 False 100 100 0.000215727 0.000110502 0.512231
0.2 10 False 100 1000 0.000375419 0.0001613 0.429653
0.2 10 False 1000 100 0.000336999 0.000145628 0.432132
0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618
0.2 25 True 100 100 0.000218705 0.000129913 0.594009
0.2 25 True 100 1000 0.000394794 0.00029428 0.745402
0.2 25 True 1000 100 0.000404483 0.0002693 0.665788
0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052
0.2 25 False 100 100 0.000221494 0.0001306 0.589632
0.2 25 False 100 1000 0.000396436 0.000297204 0.74969
0.2 25 False 1000 100 0.000409346 0.000270068 0.659754
0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046
0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836
0.5 1 True 100 1000 0.000415328 0.000223073 0.537101
0.5 1 True 1000 100 0.000358324 0.00011269 0.314492
0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851
0.5 1 False 100 100 0.000224196 0.000101423 0.452386
0.5 1 False 100 1000 0.000400987 0.000223286 0.556841
0.5 1 False 1000 100 0.000368825 0.00011224 0.304318
0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563
0.5 10 True 100 100 0.000222125 0.000112308 0.505608
0.5 10 True 100 1000 0.000461088 0.00032357 0.701753
0.5 10 True 1000 100 0.000394624 0.000225497 0.571422
0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801
0.5 10 False 100 100 0.000232083 0.000114978 0.495418
0.5 10 False 100 1000 0.000454574 0.000324632 0.714146
0.5 10 False 1000 100 0.000379097 0.000227768 0.600817
0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638
0.5 25 True 100 100 0.00023429 0.000151703 0.647501
0.5 25 True 100 1000 0.000497462 0.000598873 1.20386
0.5 25 True 1000 100 0.000460778 0.000557038 1.20891
0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845
0.5 25 False 100 100 0.000228981 0.000155334 0.678371
0.5 25 False 100 1000 0.000496139 0.000620789 1.25124
0.5 25 False 1000 100 0.00045473 0.000551528 1.21287
0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927
0.8 1 True 100 100 0.000222037 0.000105301 0.47425
0.8 1 True 100 1000 0.000410804 0.000329327 0.801664
0.8 1 True 1000 100 0.000349735 0.000131225 0.375212
0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633
0.8 1 False 100 100 0.000214079 0.000107486 0.502085
0.8 1 False 100 1000 0.000413746 0.000323244 0.781261
0.8 1 False 1000 100 0.000348983 0.000131983 0.378193
0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282
0.8 10 True 100 100 0.000229159 0.00011825 0.516017
0.8 10 True 100 1000 0.000498845 0.000532618 1.0677
0.8 10 True 1000 100 0.000383126 0.00029935 0.781336
0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689
0.8 10 False 100 100 0.000230783 0.000124958 0.541452
0.8 10 False 100 1000 0.000493393 0.000550654 1.11606
0.8 10 False 1000 100 0.000377167 0.000298581 0.791642
0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024
0.8 25 True 100 100 0.000233496 0.000175241 0.75051
0.8 25 True 100 1000 0.00055654 0.00102658 1.84458
0.8 25 True 1000 100 0.000463814 0.000783267 1.68875
0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132
0.8 25 False 100 100 0.000240243 0.000175047 0.728625
0.8 25 False 100 1000 0.000578102 0.00104499 1.80763
0.8 25 False 1000 100 0.000485113 0.000776849 1.60138
0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992
```

Args:
  sp_a: SparseTensor A, of rank 2.
  b: A dense Matrix with the same dtype as sp_a.
  adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).
  adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).
  name: A name prefix for the returned tensors (optional)

Returns:
  A dense matrix (pseudo-code in dense np.matrix notation):
    A = A.H if adjoint_a else A
    B = B.H if adjoint_b else B
    return A*B

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_tensor_to_dense(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_tensor_to_dense, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_tensor_to_dense

Return

Applicative

Original documentation for Builder.sparse_tensor_to_dense

def sparse_tensor_to_dense(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_tensor_to_dense made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_tensor_to_dense

def sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None)

Converts a SparseTensor into a dense tensor.

This op is a convenience wrapper around sparse_to_dense for SparseTensors.

For example, if sp_input has shape [3, 5] and non-empty string values:

[0, 1]: a
[0, 3]: b
[2, 0]: c

and default_value is x, then the output will be a dense [3, 5] string tensor with values:

[[x a x b x]
 [x x x x x]
 [c x x x x]]

Indices must be without repeats. This is only tested if validate_indices is True.

Args:
  sp_input: The input SparseTensor.
  default_value: Scalar value to set for indices not specified in sp_input. Defaults to zero.
  validate_indices: A boolean value. If True, indices are checked to make sure they are sorted in lexicographic order and that there are no repeats.
  name: A name prefix for the returned tensors (optional).

Returns:
  A dense tensor with shape sp_input.shape and values specified by the non-empty values in sp_input. Indices not in sp_input are assigned default_value.

Raises:
  TypeError: If sp_input is not a SparseTensor.
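As a sanity check on the [3, 5] example above, the same densification can be written directly in NumPy (densify is a hypothetical helper for this note only):

```python
import numpy as np

def densify(indices, values, shape, default_value):
    # Start from a tensor filled with default_value, then scatter the
    # non-empty values into their positions.
    dense = np.full(shape, default_value, dtype=object)
    for (i, j), v in zip(indices, values):
        dense[i, j] = v
    return dense

# The [3, 5] string example from the documentation above.
print(densify([(0, 1), (0, 3), (2, 0)], ["a", "b", "c"], (3, 5), "x"))
# [['x' 'a' 'x' 'b' 'x']
#  ['x' 'x' 'x' 'x' 'x']
#  ['c' 'x' 'x' 'x' 'x']]
```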

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_tensor_to_dense_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_tensor_to_dense_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_tensor_to_dense_layer

Return

Applicative

Original documentation for Builder.sparse_tensor_to_dense_layer

def sparse_tensor_to_dense_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_tensor_to_dense, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_tensor_to_dense

def sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None):

Converts a SparseTensor into a dense tensor.

This op is a convenience wrapper around sparse_to_dense for SparseTensors.

For example, if sp_input has shape [3, 5] and non-empty string values:

[0, 1]: a
[0, 3]: b
[2, 0]: c

and default_value is x, then the output will be a dense [3, 5] string tensor with values:

[[x a x b x]
 [x x x x x]
 [c x x x x]]

Indices must be without repeats. This is only tested if validate_indices is True.

Args:
  sp_input: The input SparseTensor.
  default_value: Scalar value to set for indices not specified in sp_input. Defaults to zero.
  validate_indices: A boolean value. If True, indices are checked to make sure they are sorted in lexicographic order and that there are no repeats.
  name: A name prefix for the returned tensors (optional).

Returns:
  A dense tensor with shape sp_input.shape and values specified by the non-empty values in sp_input. Indices not in sp_input are assigned default_value.

Raises:
  TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_to_dense(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_to_dense, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_to_dense

Return

Applicative

Original documentation for Builder.sparse_to_dense

def sparse_to_dense(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_to_dense made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_to_dense

def sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)

Converts a sparse representation into a dense tensor.

Builds an array dense with shape output_shape such that

```python
# If sparse_indices is scalar
dense[i] = (i == sparse_indices ? sparse_values : default_value)

# If sparse_indices is a vector, then for each i
dense[sparse_indices[i]] = sparse_values[i]

# If sparse_indices is an n by d matrix, then for each i in [0, n)
dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
```

All other values in dense are set to default_value. If sparse_values is a scalar, all sparse indices are set to this single value.

Indices should be sorted in lexicographic order, and indices must not contain any repeats. If validate_indices is True, these properties are checked during execution.

Args:
  sparse_indices: A 0-D, 1-D, or 2-D Tensor of type int32 or int64. sparse_indices[i] contains the complete index where sparse_values[i] will be placed.
  output_shape: A 1-D Tensor of the same type as sparse_indices. Shape of the dense output tensor.
  sparse_values: A 0-D or 1-D Tensor. Values corresponding to each row of sparse_indices, or a scalar value to be used for all sparse indices.
  default_value: A 0-D Tensor of the same type as sparse_values. Value to set for indices not specified in sparse_indices. Defaults to zero.
  validate_indices: A boolean value. If True, indices are checked to make sure they are sorted in lexicographic order and that there are no repeats.
  name: A name for the operation (optional).

Returns:
  Dense Tensor of shape output_shape. Has the same type as sparse_values.
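The scalar / vector / n-by-d index cases above can be mimicked in a few lines of NumPy. This is a rough sketch of the documented semantics, not the actual kernel; sparse_to_dense_ref is a hypothetical name, and it assumes indices can be normalized to an n x d matrix:

```python
import numpy as np

def sparse_to_dense_ref(sparse_indices, output_shape, sparse_values, default_value=0):
    dense = np.full(output_shape, default_value)
    # Normalize indices to an n x d matrix, where d is the rank of the output.
    indices = np.array(sparse_indices).reshape(-1, len(output_shape))
    values = np.broadcast_to(sparse_values, (indices.shape[0],))
    for idx, v in zip(indices, values):
        dense[tuple(idx)] = v  # scatter one value per index row
    return dense

print(sparse_to_dense_ref([1, 3], (5,), [7, 8]))          # [0 7 0 8 0]
print(sparse_to_dense_ref([[0, 0], [2, 3]], (3, 4), 9))   # scalar value used for all indices
```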

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_to_dense_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_to_dense_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_to_dense_layer

Return

Applicative

Original documentation for Builder.sparse_to_dense_layer

def sparse_to_dense_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_to_dense, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_to_dense

def sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None):

Converts a sparse representation into a dense tensor.

Builds an array dense with shape output_shape such that

```python
# If sparse_indices is scalar
dense[i] = (i == sparse_indices ? sparse_values : default_value)

# If sparse_indices is a vector, then for each i
dense[sparse_indices[i]] = sparse_values[i]

# If sparse_indices is an n by d matrix, then for each i in [0, n)
dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
```

All other values in dense are set to default_value. If sparse_values is a scalar, all sparse indices are set to this single value.

Indices should be sorted in lexicographic order, and indices must not contain any repeats. If validate_indices is True, these properties are checked during execution.

Args:
  sparse_indices: A 0-D, 1-D, or 2-D Tensor of type int32 or int64. sparse_indices[i] contains the complete index where sparse_values[i] will be placed.
  output_shape: A 1-D Tensor of the same type as sparse_indices. Shape of the dense output tensor.
  sparse_values: A 0-D or 1-D Tensor. Values corresponding to each row of sparse_indices, or a scalar value to be used for all sparse indices.
  default_value: A 0-D Tensor of the same type as sparse_values. Value to set for indices not specified in sparse_indices. Defaults to zero.
  validate_indices: A boolean value. If True, indices are checked to make sure they are sorted in lexicographic order and that there are no repeats.
  name: A name for the operation (optional).

Returns:
  Dense Tensor of shape output_shape. Has the same type as sparse_values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_to_indicator(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_to_indicator, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_to_indicator

Return

Applicative

Original documentation for Builder.sparse_to_indicator

def sparse_to_indicator(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_to_indicator made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_to_indicator

def sparse_to_indicator(sp_input, vocab_size, name=None)

Converts a SparseTensor of ids into a dense bool indicator tensor.

The last dimension of sp_input.indices is discarded and replaced with the values of sp_input. If sp_input.shape = [D0, D1, ..., Dn, K], then output.shape = [D0, D1, ..., Dn, vocab_size], where

output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True

and False elsewhere in output.

For example, if sp_input.shape = [2, 3, 4] with non-empty values:

[0, 0, 0]: 0
[0, 1, 0]: 10
[1, 0, 3]: 103
[1, 1, 2]: 150
[1, 1, 3]: 149
[1, 1, 4]: 150
[1, 2, 1]: 121

and vocab_size = 200, then the output will be a [2, 3, 200] dense bool tensor with False everywhere except at positions

(0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),
(1, 2, 121).

Note that repeats are allowed in the input SparseTensor. This op is useful for converting SparseTensors into dense formats for compatibility with ops that expect dense tensors.

The input SparseTensor must be in row-major order.

Args:
  sp_input: A SparseTensor with values property of type int32 or int64.
  vocab_size: A scalar int64 Tensor (or Python int) containing the new size of the last dimension, all(0 <= sp_input.values < vocab_size).
  name: A name prefix for the returned tensors (optional)

Returns:
  A dense bool indicator tensor representing the indices with specified value.

Raises:
  TypeError: If sp_input is not a SparseTensor.
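A NumPy rendering of the worked example above (to_indicator is a hypothetical helper; it reproduces the documented output positions, including collapsing the repeated id 150 at (1, 1, 150)):

```python
import numpy as np

def to_indicator(indices, values, batch_shape, vocab_size):
    # The last index coordinate is discarded and replaced by the id value.
    out = np.zeros(batch_shape + (vocab_size,), dtype=bool)
    for idx, v in zip(indices, values):
        out[idx[:-1] + (v,)] = True
    return out

indices = [(0, 0, 0), (0, 1, 0), (1, 0, 3), (1, 1, 2), (1, 1, 3), (1, 1, 4), (1, 2, 1)]
values = [0, 10, 103, 150, 149, 150, 121]
out = to_indicator(indices, values, (2, 3), 200)
print(np.argwhere(out))  # the six positions listed above
```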

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_to_indicator_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_to_indicator_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_to_indicator_layer

Return

Applicative

Original documentation for Builder.sparse_to_indicator_layer

def sparse_to_indicator_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_to_indicator, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_to_indicator

def sparse_to_indicator(sp_input, vocab_size, name=None):

Converts a SparseTensor of ids into a dense bool indicator tensor.

The last dimension of sp_input.indices is discarded and replaced with the values of sp_input. If sp_input.shape = [D0, D1, ..., Dn, K], then output.shape = [D0, D1, ..., Dn, vocab_size], where

output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True

and False elsewhere in output.

For example, if sp_input.shape = [2, 3, 4] with non-empty values:

[0, 0, 0]: 0
[0, 1, 0]: 10
[1, 0, 3]: 103
[1, 1, 2]: 150
[1, 1, 3]: 149
[1, 1, 4]: 150
[1, 2, 1]: 121

and vocab_size = 200, then the output will be a [2, 3, 200] dense bool tensor with False everywhere except at positions

(0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),
(1, 2, 121).

Note that repeats are allowed in the input SparseTensor. This op is useful for converting SparseTensors into dense formats for compatibility with ops that expect dense tensors.

The input SparseTensor must be in row-major order.

Args:
  sp_input: A SparseTensor with values property of type int32 or int64.
  vocab_size: A scalar int64 Tensor (or Python int) containing the new size of the last dimension, all(0 <= sp_input.values < vocab_size).
  name: A name prefix for the returned tensors (optional)

Returns:
  A dense bool indicator tensor representing the indices with specified value.

Raises:
  TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_transpose(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_transpose, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_transpose

Return

Applicative

Original documentation for Builder.sparse_transpose

def sparse_transpose(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sparse_transpose made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sparse_transpose

def sparse_transpose(sp_input, perm=None, name=None)

Transposes a SparseTensor

The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

For example, if sp_input has shape [4, 5] and indices / values:

[0, 3]: b
[0, 1]: a
[3, 1]: d
[2, 0]: c

then the output will be a SparseTensor of shape [5, 4] and indices / values:

[0, 2]: c
[1, 0]: a
[1, 3]: d
[3, 0]: b

Args:
  sp_input: The input SparseTensor.
  perm: A permutation of the dimensions of sp_input.
  name: A name prefix for the returned tensors (optional)

Returns:
  A transposed SparseTensor.

Raises:
  TypeError: If sp_input is not a SparseTensor.
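Transposing a SparseTensor only permutes the index columns and the shape. A minimal COO-style sketch of the documented example, using plain tuples rather than the real SparseTensor type (sparse_transpose_ref is a hypothetical name):

```python
import numpy as np

def sparse_transpose_ref(indices, values, shape, perm=None):
    indices = np.array(indices)
    if perm is None:
        perm = range(indices.shape[1])[::-1]  # default: reverse the dimensions
    perm = list(perm)
    new_indices = indices[:, perm]
    new_shape = [shape[p] for p in perm]
    # Re-sort the entries into canonical row-major ordering.
    order = np.lexsort(new_indices.T[::-1])
    return new_indices[order], [values[i] for i in order], new_shape

indices = [(0, 3), (0, 1), (3, 1), (2, 0)]
ix, vals, shape = sparse_transpose_ref(indices, list("badc"), (4, 5))
print(shape)  # [5, 4]
print([(tuple(i), v) for i, v in zip(ix.tolist(), vals)])
# [((0, 2), 'c'), ((1, 0), 'a'), ((1, 3), 'd'), ((3, 0), 'b')]
```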

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sparse_transpose_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sparse_transpose_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sparse_transpose_layer

Return

Applicative

Original documentation for Builder.sparse_transpose_layer

def sparse_transpose_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sparse_transpose, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sparse_transpose

def sparse_transpose(sp_input, perm=None, name=None):

Transposes a SparseTensor

The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

For example, if sp_input has shape [4, 5] and indices / values:

[0, 3]: b
[0, 1]: a
[3, 1]: d
[2, 0]: c

then the output will be a SparseTensor of shape [5, 4] and indices / values:

[0, 2]: c
[1, 0]: a
[1, 3]: d
[3, 0]: b

Args:
  sp_input: The input SparseTensor.
  perm: A permutation of the dimensions of sp_input.
  name: A name prefix for the returned tensors (optional)

Returns:
  A transposed SparseTensor.

Raises:
  TypeError: If sp_input is not a SparseTensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def split(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.split, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.split

Return

Applicative

Original documentation for Builder.split

def split(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.split made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.split

def split(split_dim, num_split, value, name="split")

Splits a tensor into num_split tensors along one dimension.

Splits value along dimension split_dim into num_split smaller tensors. Requires that num_split evenly divide value.shape[split_dim].

For example:

```python
# 'value' is a tensor with shape [5, 30]
# Split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(1, 3, value)
tf.shape(split0) ==> [5, 10]
```

Note: If you are splitting along an axis by the length of that axis, consider using unpack, e.g.

```python
num_items = t.get_shape()[axis].value
[tf.squeeze(s, [axis]) for s in tf.split(axis, num_items, t)]
```

can be rewritten as

```python
tf.unpack(t, axis=axis)
```

Args:
  split_dim: A 0-D int32 Tensor. The dimension along which to split. Must be in the range [0, rank(value)).
  num_split: A Python integer. The number of ways to split.
  value: The Tensor to split.
  name: A name for the operation (optional).

Returns:
  num_split Tensor objects resulting from splitting value.
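NumPy's np.split has the same evenly-divides contract, which makes the [5, 30] example above easy to reproduce (note tf.split's older (split_dim, num_split, value) argument order versus NumPy's (value, num_split, axis)):

```python
import numpy as np

value = np.zeros((5, 30))
# Split 'value' into 3 arrays along dimension 1; 30 must be divisible by 3.
split0, split1, split2 = np.split(value, 3, axis=1)
print(split0.shape)  # (5, 10)
```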

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def split_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.split_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.split_layer

Return

Applicative

Original documentation for Builder.split_layer

def split_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.split, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.split

def split(split_dim, num_split, value, name="split"):

Splits a tensor into num_split tensors along one dimension.

Splits value along dimension split_dim into num_split smaller tensors. Requires that num_split evenly divide value.shape[split_dim].

For example:

```python
# 'value' is a tensor with shape [5, 30]
# Split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(1, 3, value)
tf.shape(split0) ==> [5, 10]
```

Note: If you are splitting along an axis by the length of that axis, consider using unpack, e.g.

```python
num_items = t.get_shape()[axis].value
[tf.squeeze(s, [axis]) for s in tf.split(axis, num_items, t)]
```

can be rewritten as

```python
tf.unpack(t, axis=axis)
```

Args:
  split_dim: A 0-D int32 Tensor. The dimension along which to split. Must be in the range [0, rank(value)).
  num_split: A Python integer. The number of ways to split.
  value: The Tensor to split.
  name: A name for the operation (optional).

Returns:
  num_split Tensor objects resulting from splitting value.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sqrt(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sqrt, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sqrt

Return

Applicative

Original documentation for Builder.sqrt

def sqrt(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sqrt made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sqrt

def sqrt(x, name=None)

Computes square root of x element-wise.

I.e., y = sqrt(x) = x^(1/2).

Args:
  x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).

Returns:
  A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sqrt_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sqrt_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sqrt_layer

Return

Applicative

Original documentation for Builder.sqrt_layer

def sqrt_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sqrt, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sqrt

def sqrt(x, name=None):

Computes square root of x element-wise.

I.e., y = sqrt(x) = x^(1/2).

Args:
  x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).

Returns:
  A Tensor or SparseTensor, respectively. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def square(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.square, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.square

Return

Applicative

Original documentation for Builder.square

def square(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.square made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.square

def square(x, name=None)

Computes square of x element-wise.

I.e., y = x * x = x^2.

Args:
  x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).

Returns:
  A Tensor or SparseTensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def square_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.square_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.square_layer

Return

Applicative

Original documentation for Builder.square_layer

def square_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.square, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.square

def square(x, name=None):

Computes square of x element-wise.

I.e., y = x * x = x^2.

Args:
  x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).

Returns:
  A Tensor or SparseTensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def squared_difference(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.squared_difference, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.squared_difference

Return

Applicative

Original documentation for Builder.squared_difference

def squared_difference(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.squared_difference made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.squared_difference

def squared_difference(x, y, name=None)

Returns (x - y)(x - y) element-wise.

NOTE: SquaredDifference supports broadcasting. More about broadcasting here

Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as x.
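Because the op broadcasts, (x - y)(x - y) can pair shapes the way NumPy does; a one-line illustration of the documented semantics:

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)  # shape (2, 3)
y = np.array([1.0, 2.0, 3.0])     # shape (3,), broadcast across the rows of x
print((x - y) ** 2)
# [[1. 1. 1.]
#  [4. 4. 4.]]
```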

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def squared_difference_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.squared_difference_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.squared_difference_layer

Return

Applicative

Original documentation for Builder.squared_difference_layer

def squared_difference_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.squared_difference, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.squared_difference

def squared_difference(x, y, name=None):

Returns (x - y)(x - y) element-wise.

NOTE: SquaredDifference supports broadcasting. More about broadcasting here

Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def squeeze(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.squeeze, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.squeeze

Return

Applicative

Original documentation for Builder.squeeze

def squeeze(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.squeeze made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.squeeze

def squeeze(input, squeeze_dims=None, name=None)

Removes dimensions of size 1 from the shape of a tensor.

Given a tensor input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don't want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying squeeze_dims.

For example:

```prettyprint
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t)) ==> [2, 3]
```

Or, to remove specific size 1 dimensions:

```prettyprint
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
```

Args:
  input: A Tensor. The input to squeeze.
  squeeze_dims: An optional list of ints. Defaults to []. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as input. Contains the same data as input, but has one or more dimensions of size 1 removed.
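np.squeeze follows the same rules, so the shapes quoted above can be checked directly (NumPy takes the axes as a tuple rather than a list):

```python
import numpy as np

t = np.zeros((1, 2, 1, 3, 1, 1))
print(np.squeeze(t).shape)               # (2, 3)
print(np.squeeze(t, axis=(2, 4)).shape)  # (1, 2, 3, 1)
```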

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def squeeze_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.squeeze_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.squeeze_layer

Return

Applicative

Original documentation for Builder.squeeze_layer

def squeeze_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.squeeze, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.squeeze

def squeeze(input, squeeze_dims=None, name=None):

Removes dimensions of size 1 from the shape of a tensor.

Given a tensor input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don't want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying squeeze_dims.

For example:

```prettyprint
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t)) ==> [2, 3]
```

Or, to remove specific size 1 dimensions:

```prettyprint
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
```

Args:
  input: A Tensor. The input to squeeze.
  squeeze_dims: An optional list of ints. Defaults to []. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as input. Contains the same data as input, but has one or more dimensions of size 1 removed.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def state_saving_rnn(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.state_saving_rnn, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.state_saving_rnn

Return

Applicative

Original documentation for Builder.state_saving_rnn

def state_saving_rnn(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.state_saving_rnn made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.state_saving_rnn

def state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None)

RNN that accepts a state saver for time-truncated RNN calculation.

Args:
  cell: An instance of RNNCell.
  inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size].
  state_saver: A state saver object with methods state and save_state.
  state_name: Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., cell.state_size is a tuple) then state_name should be a tuple of strings having the same length as cell.state_size. Otherwise it should be a single string.
  sequence_length: (optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length.
  scope: VariableScope for the created subgraph; defaults to "RNN".

Returns:
  A pair (outputs, state) where:
    outputs is a length T list of outputs (one for each input)
    states is the final state

Raises:
  TypeError: If cell is not an instance of RNNCell.
  ValueError: If inputs is None or an empty list, or if the arity and type of state_name does not match that of cell.state_size.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def state_saving_rnn_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.state_saving_rnn_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.state_saving_rnn_layer

Return

Applicative

Original documentation for Builder.state_saving_rnn_layer

def state_saving_rnn_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.state_saving_rnn, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.state_saving_rnn

def state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None):

RNN that accepts a state saver for time-truncated RNN calculation.

Args:
  cell: An instance of RNNCell.
  inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size].
  state_saver: A state saver object with methods state and save_state.
  state_name: Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., cell.state_size is a tuple) then state_name should be a tuple of strings having the same length as cell.state_size. Otherwise it should be a single string.
  sequence_length: (optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length.
  scope: VariableScope for the created subgraph; defaults to "RNN".

Returns:
  A pair (outputs, state) where:
    outputs is a length T list of outputs (one for each input)
    states is the final state

Raises:
  TypeError: If cell is not an instance of RNNCell.
  ValueError: If inputs is None or an empty list, or if the arity and type of state_name does not match that of cell.state_size.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def stop_gradient(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.stop_gradient, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.stop_gradient

Return

Applicative

Original documentation for Builder.stop_gradient

def stop_gradient(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.stop_gradient made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.stop_gradient

def stop_gradient(input, name=None)

Stops gradient computation.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding out inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.

This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:

  • The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
  • Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
  • Adversarial training, where no backprop should happen through the adversarial example generation process.

Args:
  input: A Tensor.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def stop_gradient_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.stop_gradient_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.stop_gradient_layer

Return

Applicative

Original documentation for Builder.stop_gradient_layer

def stop_gradient_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.stop_gradient, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.stop_gradient

def stop_gradient(input, name=None):

Stops gradient computation.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding out inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.

This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:

  • The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
  • Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
  • Adversarial training, where no backprop should happen through the adversarial example generation process.

Args:
  input: A Tensor.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def store_on(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.store_on, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.store_on

Return

Applicative

Original documentation for Builder.store_on

def store_on(builder, other):

(no documentation available)

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def strided_slice(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.strided_slice, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.strided_slice

Return

Applicative

Original documentation for Builder.strided_slice

def strided_slice(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.strided_slice made to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.strided_slice

def strided_slice(input_, begin, end, strides, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None)

Extracts a strided slice from a tensor.

To a first order, this operation extracts a slice of size end - begin from a tensor input starting at the location specified by begin. The slice continues by adding stride to the begin index until all dimensions are not less than end. Note that components of stride can be negative, which causes a reverse slice.

This operation can be thought of as an encoding of a numpy style sliced range. Given a python slice input[<spec0>, <spec1>, ..., <specn>] this function will be called as follows.

begin, end, and strides will be all length n. n is in general not the same dimensionality as input.

For the ith spec, begin_mask, end_mask, ellipsis_mask, new_axis_mask, and shrink_axis_mask will have the ith bit corresponding to the ith spec.

If the ith bit of begin_mask is non-zero, begin[i] is ignored and the fullest possible range in that dimension is used instead. end_mask works analogously, except with the end range.

foo[5:,:,:3] on a 7x8x9 tensor is equivalent to foo[5:7,0:8,0:3]. foo[::-1] reverses a tensor with shape 8.

If the ith bit of ellipsis_mask is set, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in ellipsis_mask.

For example foo[3:5,...,4:5] on a shape 10x3x3x10 tensor is equivalent to foo[3:5,:,:,4:5] and foo[3:5,...] is equivalent to foo[3:5,:,:,:].

If the ith bit of new_axis_mask is one, then a begin, end, and stride are ignored and a new length 1 dimension is added at this point in the output tensor.

For example foo[3:5,4] on a 10x8 tensor produces a shape 2 tensor whereas foo[3:5,4:5] produces a shape 2x1 tensor with shrink_mask being 1<<1 == 2.

If the ith bit of shrink_axis_mask is one, then begin[i], end[i], and stride[i] are used to do a slice in the appropriate dimension, but the output tensor will be reduced in dimensionality by one. This is only valid if the ith entry of slice[i]==1.

NOTE: begin and end are zero-indexed. strides entries must be non-zero.

```
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.strided_slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]]
tf.strided_slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3], [4, 4, 4]]]
tf.strided_slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==> [[[4, 4, 4], [3, 3, 3]]]
```

Args:
  input_: A Tensor.
  begin: An int32 or int64 Tensor.
  end: An int32 or int64 Tensor.
  strides: An int32 or int64 Tensor.
  begin_mask: An int32 mask.
  end_mask: An int32 mask.
  ellipsis_mask: An int32 mask.
  new_axis_mask: An int32 mask.
  shrink_axis_mask: An int32 mask.
  var: The variable corresponding to input_ or None
  name: A name for the operation (optional).

Returns:
  A Tensor the same type as input.
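The mask examples above are exactly Python's extended indexing; a quick NumPy check of the quoted equivalences on throwaway arrays:

```python
import numpy as np

foo = np.zeros((10, 3, 3, 10))
# foo[3:5, ..., 4:5] is equivalent to foo[3:5, :, :, 4:5]
assert foo[3:5, ..., 4:5].shape == foo[3:5, :, :, 4:5].shape == (2, 3, 3, 1)

bar = np.zeros((10, 8))
# shrink_axis_mask: an integer index drops the dimension, a 1-wide slice keeps it
print(bar[3:5, 4].shape)    # (2,)
print(bar[3:5, 4:5].shape)  # (2, 1)
```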

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def strided_slice_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.strided_slice_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.strided_slice_layer

Return

Applicative

Original documentation for Builder.strided_slice_layer

def strided_slice_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.strided_slice, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.strided_slice

def strided_slice(input_, begin, end, strides, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None):

Extracts a strided slice from a tensor.

To a first order, this operation extracts a slice of size end - begin from a tensor input starting at the location specified by begin. The slice continues by adding stride to the begin index until all dimensions are not less than end. Note that components of stride can be negative, which causes a reverse slice.

This operation can be thought of as an encoding of a numpy style sliced range. Given a python slice input[<spec0>, <spec1>, ..., <specn>] this function will be called as follows.

begin, end, and strides will be all length n. n is in general not the same dimensionality as input.

For the ith spec, begin_mask, end_mask, ellipsis_mask, new_axis_mask, and shrink_axis_mask will have the ith bit corresponding to the ith spec.

If the ith bit of begin_mask is non-zero, begin[i] is ignored and the fullest possible range in that dimension is used instead. end_mask works analogously, except with the end range.

foo[5:,:,:3] on a 7x8x9 tensor is equivalent to foo[5:7,0:8,0:3]. foo[::-1] reverses a tensor with shape 8.

If the ith bit of ellipsis_mask is set, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in ellipsis_mask.

For example foo[3:5,...,4:5] on a shape 10x3x3x10 tensor is equivalent to foo[3:5,:,:,4:5] and foo[3:5,...] is equivalent to foo[3:5,:,:,:].

If the ith bit of new_axis_mask is one, then a begin, end, and stride are ignored and a new length 1 dimension is added at this point in the output tensor.

For example foo[3:5,4] on a 10x8 tensor produces a shape 2 tensor whereas foo[3:5,4:5] produces a shape 2x1 tensor with shrink_mask being 1<<1 == 2.

If the ith bit of shrink_axis_mask is one, then begin[i], end[i], and stride[i] are used to do a slice in the appropriate dimension, but the output tensor will be reduced in dimensionality by one. This is only valid if the ith entry of slice[i]==1.

NOTE: begin and end are zero-indexed. strides entries must be non-zero.

```
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.strided_slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]]
tf.strided_slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3], [4, 4, 4]]]
tf.strided_slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==> [[[4, 4, 4], [3, 3, 3]]]
```

Args:
  • input_: A Tensor.
  • begin: An int32 or int64 Tensor.
  • end: An int32 or int64 Tensor.
  • strides: An int32 or int64 Tensor.
  • begin_mask: An int32 mask.
  • end_mask: An int32 mask.
  • ellipsis_mask: An int32 mask.
  • new_axis_mask: An int32 mask.
  • shrink_axis_mask: An int32 mask.
  • var: The variable corresponding to input_, or None.
  • name: A name for the operation (optional).

Returns: A Tensor of the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_join(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_join, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_join

Return

Applicative

Original documentation for Builder.string_join

def string_join(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.string_join to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.string_join

def string_join(inputs, separator=None, name=None)

Joins the strings in the given list of string tensors into one tensor, with the given separator (default is an empty separator).

Args:
  • inputs: A list of at least 1 Tensor objects of type string. The tensors must all have the same shape, or be scalars. Scalars may be mixed in; these will be broadcast to the shape of non-scalar inputs.
  • separator: An optional string, the join separator. Defaults to "".
  • name: A name for the operation (optional).

Returns: A Tensor of type string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
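
Since tf.string_join takes a list as its first argument, a plain TensorFlow sketch is the clearest illustration (the tensors below are hypothetical):

import tensorflow as tf

a = tf.constant(["hello", "good"])
b = tf.constant(["world", "morning"])

# Element-wise join with a space separator:
# ==> ["hello world", "good morning"]
joined = tf.string_join([a, b], separator=" ")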

def string_join_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_join_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_join_layer

Return

Applicative

Original documentation for Builder.string_join_layer

def string_join_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.string_join, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.string_join

def string_join(inputs, separator=None, name=None):

Joins the strings in the given list of string tensors into one tensor, with the given separator (default is an empty separator).

Args:
  • inputs: A list of at least 1 Tensor objects of type string. The tensors must all have the same shape, or be scalars. Scalars may be mixed in; these will be broadcast to the shape of non-scalar inputs.
  • separator: An optional string, the join separator. Defaults to "".
  • name: A name for the operation (optional).

Returns: A Tensor of type string.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_split(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_split, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_split

Return

Applicative

Original documentation for Builder.string_split

def string_split(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.string_split to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.string_split

def string_split(source, delimiter=" ")

Split elements of source based on delimiter into a SparseTensor.

Let N be the size of source (typically N will be the batch size). Split each element of source based on delimiter and return a SparseTensor containing the split tokens. Empty tokens are ignored.

If delimiter is an empty string, each element of the source is split into individual 1 character strings.

For example: N = 2, source[0] is 'hello world' and source[1] is 'a b c', then the output will be

st.indices = [0, 0;
              0, 1;
              1, 0;
              1, 1;
              1, 2]
st.shape = [2, 3]
st.values = ['hello', 'world', 'a', 'b', 'c']

Args: source: 1-D string Tensor, the strings to split. delimiter: 0-D string Tensor, the delimiter character, the string should be length 0 or 1.

Returns: A SparseTensor of rank 2, the strings split according to the delimiter. The first column of the indices corresponds to the row in source and the second column corresponds to the index of the split component in this row.

Raises: ValueError: If delimiter is not a character.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
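
A minimal sketch of the underlying tf.string_split call (hypothetical input; the lifted method would take source from the builder):

import tensorflow as tf

source = tf.constant(["hello world", "a b c"])

# Split on the default " " delimiter into a SparseTensor
st = tf.string_split(source)
# st.values  ==> ['hello', 'world', 'a', 'b', 'c']
# st.indices ==> [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2]]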

def string_split_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_split_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_split_layer

Return

Applicative

Original documentation for Builder.string_split_layer

def string_split_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.string_split, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.string_split

def string_split(source, delimiter=" "):

Split elements of source based on delimiter into a SparseTensor.

Let N be the size of source (typically N will be the batch size). Split each element of source based on delimiter and return a SparseTensor containing the split tokens. Empty tokens are ignored.

If delimiter is an empty string, each element of the source is split into individual 1 character strings.

For example: N = 2, source[0] is 'hello world' and source[1] is 'a b c', then the output will be

st.indices = [0, 0;
              0, 1;
              1, 0;
              1, 1;
              1, 2]
st.shape = [2, 3]
st.values = ['hello', 'world', 'a', 'b', 'c']

Args: source: 1-D string Tensor, the strings to split. delimiter: 0-D string Tensor, the delimiter character, the string should be length 0 or 1.

Returns: A SparseTensor of rank 2, the strings split according to the delimiter. The first column of the indices corresponds to the row in source and the second column corresponds to the index of the split component in this row.

Raises: ValueError: If delimiter is not a character.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_to_hash_bucket(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_hash_bucket, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_hash_bucket

Return

Applicative

Original documentation for Builder.string_to_hash_bucket

def string_to_hash_bucket(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.string_to_hash_bucket to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.string_to_hash_bucket

def string_to_hash_bucket(string_tensor, num_buckets, name=None)

Converts each string in the input Tensor to its hash mod by a number of buckets.

The hash function is deterministic on the content of the string within the process.

Note that the hash function may change from time to time. This functionality will be deprecated and it's recommended to use tf.string_to_hash_bucket_fast() or tf.string_to_hash_bucket_strong().

Args: string_tensor: A Tensor of type string. num_buckets: An int that is >= 1. The number of buckets. name: A name for the operation (optional).

Returns: A Tensor of type int64. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_to_hash_bucket_fast(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_hash_bucket_fast, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_hash_bucket_fast

Return

Applicative

Original documentation for Builder.string_to_hash_bucket_fast

def string_to_hash_bucket_fast(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.string_to_hash_bucket_fast to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.string_to_hash_bucket_fast

def string_to_hash_bucket_fast(input, num_buckets, name=None)

Converts each string in the input Tensor to its hash mod by a number of buckets.

The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong.

Args: input: A Tensor of type string. The strings to assign a hash bucket. num_buckets: An int that is >= 1. The number of buckets. name: A name for the operation (optional).

Returns: A Tensor of type int64. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
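
A minimal sketch, assuming the tb.pipe DSL shown elsewhere in these docs, of hashing a string tensor into 1000 buckets via the lifted method:

import tensorflow as tf
from tensorbuilder import tb

words = tf.placeholder(tf.string, shape=[None])

buckets = tb.pipe(
    words,
    tb.string_to_hash_bucket_fast(1000),  # num_buckets forwarded to tf
    tb.tensor()
)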

def string_to_hash_bucket_fast_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_hash_bucket_fast_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_hash_bucket_fast_layer

Return

Applicative

Original documentation for Builder.string_to_hash_bucket_fast_layer

def string_to_hash_bucket_fast_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.string_to_hash_bucket_fast, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.string_to_hash_bucket_fast

def string_to_hash_bucket_fast(input, num_buckets, name=None):

Converts each string in the input Tensor to its hash mod by a number of buckets.

The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong.

Args: input: A Tensor of type string. The strings to assign a hash bucket. num_buckets: An int that is >= 1. The number of buckets. name: A name for the operation (optional).

Returns: A Tensor of type int64. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_to_hash_bucket_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_hash_bucket_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_hash_bucket_layer

Return

Applicative

Original documentation for Builder.string_to_hash_bucket_layer

def string_to_hash_bucket_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.string_to_hash_bucket, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.string_to_hash_bucket

def string_to_hash_bucket(string_tensor, num_buckets, name=None):

Converts each string in the input Tensor to its hash mod by a number of buckets.

The hash function is deterministic on the content of the string within the process.

Note that the hash function may change from time to time. This functionality will be deprecated and it's recommended to use tf.string_to_hash_bucket_fast() or tf.string_to_hash_bucket_strong().

Args: string_tensor: A Tensor of type string. num_buckets: An int that is >= 1. The number of buckets. name: A name for the operation (optional).

Returns: A Tensor of type int64. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_to_hash_bucket_strong(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_hash_bucket_strong, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_hash_bucket_strong

Return

Applicative

Original documentation for Builder.string_to_hash_bucket_strong

def string_to_hash_bucket_strong(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.string_to_hash_bucket_strong to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.string_to_hash_bucket_strong

def string_to_hash_bucket_strong(input, num_buckets, key, name=None)

Converts each string in the input Tensor to its hash mod by a number of buckets.

The hash function is deterministic on the content of the string within the process. The hash function is a keyed hash function, where attribute key defines the key of the hash function. key is an array of 2 elements.

A strong hash is important when inputs may be malicious, e.g. URLs with additional components. Adversaries could try to make their inputs hash to the same bucket for a denial-of-service attack or to skew the results. A strong hash prevents this by making it difficult, if not infeasible, to compute inputs that hash to the same bucket. This comes at a cost of roughly 4x higher compute time than tf.string_to_hash_bucket_fast.

Args:
  • input: A Tensor of type string. The strings to assign a hash bucket.
  • num_buckets: An int that is >= 1. The number of buckets.
  • key: A list of ints. The key for the keyed hash function, passed as a list of two uint64 elements.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_to_hash_bucket_strong_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_hash_bucket_strong_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_hash_bucket_strong_layer

Return

Applicative

Original documentation for Builder.string_to_hash_bucket_strong_layer

def string_to_hash_bucket_strong_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.string_to_hash_bucket_strong, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.string_to_hash_bucket_strong

def string_to_hash_bucket_strong(input, num_buckets, key, name=None):

Converts each string in the input Tensor to its hash mod by a number of buckets.

The hash function is deterministic on the content of the string within the process. The hash function is a keyed hash function, where attribute key defines the key of the hash function. key is an array of 2 elements.

A strong hash is important when inputs may be malicious, e.g. URLs with additional components. Adversaries could try to make their inputs hash to the same bucket for a denial-of-service attack or to skew the results. A strong hash prevents this by making it difficult, if not infeasible, to compute inputs that hash to the same bucket. This comes at a cost of roughly 4x higher compute time than tf.string_to_hash_bucket_fast.

Args:
  • input: A Tensor of type string. The strings to assign a hash bucket.
  • num_buckets: An int that is >= 1. The number of buckets.
  • key: A list of ints. The key for the keyed hash function, passed as a list of two uint64 elements.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def string_to_number(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_number, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_number

Return

Applicative

Original documentation for Builder.string_to_number

def string_to_number(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.string_to_number to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.string_to_number

def string_to_number(string_tensor, out_type=None, name=None)

Converts each string in the input Tensor to the specified numeric type.

(Note that int32 overflow results in an error while float overflow results in a rounded value.)

Args:
  • string_tensor: A Tensor of type string.
  • out_type: An optional tf.DType from: tf.float32, tf.int32. Defaults to tf.float32. The numeric type to interpret each string in string_tensor as.
  • name: A name for the operation (optional).

Returns: A Tensor of type out_type. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
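
A minimal sketch of parsing strings to float32 through the lifted method (assuming the tb.pipe DSL used elsewhere in these docs):

import tensorflow as tf
from tensorbuilder import tb

raw = tf.placeholder(tf.string, shape=[None])

numbers = tb.pipe(
    raw,
    tb.string_to_number(out_type=tf.float32),  # forwarded to tf.string_to_number
    tb.tensor()
)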

def string_to_number_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.string_to_number_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.string_to_number_layer

Return

Applicative

Original documentation for Builder.string_to_number_layer

def string_to_number_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.string_to_number, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.string_to_number

def string_to_number(string_tensor, out_type=None, name=None):

Converts each string in the input Tensor to the specified numeric type.

(Note that int32 overflow results in an error while float overflow results in a rounded value.)

Args:
  • string_tensor: A Tensor of type string.
  • out_type: An optional tf.DType from: tf.float32, tf.int32. Defaults to tf.float32. The numeric type to interpret each string in string_tensor as.
  • name: A name for the operation (optional).

Returns: A Tensor of type out_type. A Tensor of the same shape as the input string_tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sub(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sub, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sub

Return

Applicative

Original documentation for Builder.sub

def sub(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.sub to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.sub

def sub(x, y, name=None)

Returns x - y element-wise.

NOTE: Sub supports broadcasting.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
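
A minimal sketch of the lifted tf.sub, where the builder's tensor becomes x and the remaining argument is forwarded as y (assuming the tb.pipe DSL used elsewhere in these docs):

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.placeholder(tf.float32, shape=[None, 3])

diff = tb.pipe(
    x,
    tb.sub(y),   # computes x - y element-wise
    tb.tensor()
)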

def sub_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sub_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sub_layer

Return

Applicative

Original documentation for Builder.sub_layer

def sub_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.sub, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.sub

def sub(x, y, name=None):

Returns x - y element-wise.

NOTE: Sub supports broadcasting.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. y: A Tensor. Must have the same type as x. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sufficient_statistics(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sufficient_statistics, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sufficient_statistics

Return

Applicative

Original documentation for Builder.sufficient_statistics

def sufficient_statistics(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.sufficient_statistics to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.sufficient_statistics

def sufficient_statistics(x, axes, shift=None, keep_dims=False, name=None)

Calculate the sufficient statistics for the mean and variance of x.

These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data

Args:
  • x: A Tensor.
  • axes: Array of ints. Axes along which to compute mean and variance.
  • shift: A Tensor containing the value by which to shift the data for numerical stability, or None if no shift is to be performed. A shift close to the true mean provides the most numerically stable results.
  • keep_dims: produce statistics with the same dimensionality as the input.
  • name: Name used to scope the operations that compute the sufficient stats.

Returns: Four Tensor objects of the same type as x:
  • the count (number of elements to average over),
  • the (possibly shifted) sum of the elements in the array,
  • the (possibly shifted) sum of squares of the elements in the array,
  • the shift by which the mean must be corrected, or None if shift is None.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def sufficient_statistics_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.sufficient_statistics_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.sufficient_statistics_layer

Return

Applicative

Original documentation for Builder.sufficient_statistics_layer

def sufficient_statistics_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.sufficient_statistics, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.sufficient_statistics

def sufficient_statistics(x, axes, shift=None, keep_dims=False, name=None):

Calculate the sufficient statistics for the mean and variance of x.

These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data

Args:
  • x: A Tensor.
  • axes: Array of ints. Axes along which to compute mean and variance.
  • shift: A Tensor containing the value by which to shift the data for numerical stability, or None if no shift is to be performed. A shift close to the true mean provides the most numerically stable results.
  • keep_dims: produce statistics with the same dimensionality as the input.
  • name: Name used to scope the operations that compute the sufficient stats.

Returns: Four Tensor objects of the same type as x:
  • the count (number of elements to average over),
  • the (possibly shifted) sum of the elements in the array,
  • the (possibly shifted) sum of squares of the elements in the array,
  • the shift by which the mean must be corrected, or None if shift is None.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def svd(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.svd, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.svd

Return

Applicative

Original documentation for Builder.svd

def svd(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.svd to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.svd

def svd(tensor, compute_uv=True, full_matrices=False, name=None)

Computes the singular value decompositions of one or more matrices.

Computes the SVD of each inner matrix in tensor such that tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :, :])

```prettyprint
# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)
```

Args:
  • matrix: Tensor of shape [..., M, N]. Let P be the minimum of M and N.
  • compute_uv: If True then left and right singular vectors will be computed and returned in u and v, respectively. Otherwise, only the singular values will be computed, which can be significantly faster.
  • full_matrices: If true, compute full-sized u and v. If false (the default), compute only the leading P singular vectors. Ignored if compute_uv is False.
  • name: string, optional name of the operation.

Returns:
  • s: Singular values. Shape is [..., P].
  • u: Left singular vectors. If full_matrices is False (default) then shape is [..., M, P]; if full_matrices is True then shape is [..., M, M]. Not returned if compute_uv is False.
  • v: Right singular vectors. If full_matrices is False (default) then shape is [..., N, P]. If full_matrices is True then shape is [..., N, N]. Not returned if compute_uv is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
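
For reference, a plain TensorFlow sketch of the two calling conventions described above (hypothetical input):

import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[None, 4, 3])

s, u, v = tf.svd(a)                   # singular values plus u and v
s_only = tf.svd(a, compute_uv=False)  # faster when u and v are not needed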

def svd_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.svd_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.svd_layer

Return

Applicative

Original documentation for Builder.svd_layer

def svd_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.svd, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.svd

def svd(tensor, compute_uv=True, full_matrices=False, name=None):

Computes the singular value decompositions of one or more matrices.

Computes the SVD of each inner matrix in tensor such that tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :, :])

```prettyprint
# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)
```

Args:
  • matrix: Tensor of shape [..., M, N]. Let P be the minimum of M and N.
  • compute_uv: If True then left and right singular vectors will be computed and returned in u and v, respectively. Otherwise, only the singular values will be computed, which can be significantly faster.
  • full_matrices: If true, compute full-sized u and v. If false (the default), compute only the leading P singular vectors. Ignored if compute_uv is False.
  • name: string, optional name of the operation.

Returns:
  • s: Singular values. Shape is [..., P].
  • u: Left singular vectors. If full_matrices is False (default) then shape is [..., M, P]; if full_matrices is True then shape is [..., M, M]. Not returned if compute_uv is False.
  • v: Right singular vectors. If full_matrices is False (default) then shape is [..., N, P]. If full_matrices is True then shape is [..., N, N]. Not returned if compute_uv is False.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tan(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tan, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tan

Return

Applicative

Original documentation for Builder.tan

def tan(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.tan to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.tan

def tan(x, name=None)

Computes tan of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tan_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tan_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tan_layer

Return

Applicative

Original documentation for Builder.tan_layer

def tan_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.tan, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.tan

def tan(x, name=None):

Computes tan of x element-wise.

Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tanh(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tanh, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tanh

Return

Applicative

Original documentation for Builder.tanh

def tanh(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.tanh to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.tanh

def tanh(x, name=None)

Computes hyperbolic tangent of x element-wise.

Args: x: A Tensor or SparseTensor with type float, double, int32, complex64, int64, or qint32. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor respectively with the same type as x if x.dtype != qint32 otherwise the return type is quint8.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
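
A minimal sketch of applying the lifted tf.tanh element-wise (assuming the tb.pipe DSL used elsewhere in these docs):

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 5])

h = tb.pipe(
    x,
    tb.tanh(),   # element-wise hyperbolic tangent
    tb.tensor()
)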

def tanh_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tanh_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tanh_layer

Return

Applicative

Original documentation for Builder.tanh_layer

def tanh_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.tanh, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.tanh

def tanh(x, name=None):

Computes hyperbolic tangent of x element-wise.

Args: x: A Tensor or SparseTensor with type float, double, int32, complex64, int64, or qint32. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor respectively with the same type as x if x.dtype != qint32 otherwise the return type is quint8.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tensor(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tensor, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tensor

Return

Applicative

Original documentation for Builder.tensor

def tensor(self):

Returns the Tensor contained by the Builder.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
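
A minimal sketch of unwrapping the tf.Tensor at the end of a fluent chain (relu_layer is the layer helper used in the examples elsewhere in these docs):

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 5])

# Build a small network and extract the raw tf.Tensor at the end
t = (
    tb.build(x)
    .relu_layer(10)
    .tensor()
)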

def tensors(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(BuilderTree.tensors, ...)

Arguments

  • All other *args and **kwargs are forwarded to BuilderTree.tensors

Return

Applicative

Original documentation for BuilderTree.tensors

def tensors(self):

Same as tensorbuilder.core.builders.BuilderTree.builders but extracts the tensor from each tensorbuilder.core.builders.Builder.

Return

  • list( tf.Tensor )

Example

This example creates a network that solves the XOR problem using sigmoid units:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.float32, shape=[None, 1])


#Network
[activation_tensor, trainer_tensor] = (
    tb.build(x)

    .sigmoid_layer(2)
    .linear_layer(1)

    .branch(lambda logit:
    [
        logit.sigmoid() # activation
    ,
        logit
        .sigmoid_cross_entropy_with_logits(y) # loss
        .map(tf.train.AdamOptimizer(0.01).minimize) # trainer
    ])
    .tensors()
)

Same example using the DSL

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.float32, shape=[None, 1])


#Network
[activation_tensor, trainer_tensor] = tb.pipe(
    x,
    tb.sigmoid_layer(2)
    .linear_layer(1),
    [
        tb.sigmoid() # activation
    ,
        tb
        .sigmoid_cross_entropy_with_logits(y) # loss
        .map(tf.train.AdamOptimizer(0.01).minimize) # trainer
    ],
    tb.tensors()
)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def then(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.then, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.then

Return

Applicative

Original documentation for Builder.then

def then(builder, fn):

@immutable

Expects a function fn of type builder -> builder. This method is used primarily to manipulate the Builder with fine-grained control through the fluent immutable API.

Parameters

  • fn: a function of type builder -> builder.

Return

  • tensorbuilder.core.builders.Builder

Example
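
A minimal sketch, assuming the fluent API shown elsewhere in these docs, of threading a custom builder -> builder function through then (hidden_block is a hypothetical helper):

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

def hidden_block(builder):
    # any function of type builder -> builder works here
    return builder.tanh_layer(20).linear_layer(5)

h = (
    tb.build(x)
    .then(hidden_block)
    .tensor()
)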

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def then_with(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.then_with, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.then_with

Return

Applicative

Original documentation for Builder.then_with

def then_with(builder, scope_fn):

@immutable

Expects a function fn that returns a "Disposable" (implements __enter__ and __exit__) when applied to some *args and **kwargs, and returns a function g that expects a function h of type Builder -> Builder, such that

.then_with(fn, *args, **kwargs)(h)

roughly performs this computation (given the current builder)

with fn(*args, **kwargs):
    return h(builder)

For a more practical understanding look at the example.

Parameters

  • fn: a function of type Builder -> Disposable.

Return

  • Function of type (Builder -> Builder)

Examples

Create a network with 3 branches and execute each on the devices "/gpu:0", "/gpu:1" and "/cpu:0" respectively:

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = (
    tb.build(x)
    .branch(lambda x: [
        x.then_with(tf.device, "/gpu:0")(lambda x:
            x.relu_layer(20)
            .linear_layer(5)
        )
    ,
        x.then_with(tf.device, "/gpu:1")(lambda x:
            x.sigmoid_layer(20)
            .linear_layer(5)
        )
    ,
        x.then_with(tf.device, "/cpu:0")(lambda x:
            x.tanh_layer(20)
            .linear_layer(5)
        )
    ])
    .reduce(tf.add)
    .softmax()
    .tensor()
)

This looks much better with the DSL thanks to its support for scopes

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.float32, shape=[None, 10])

h = tb.pipe(
    x,
    [
        { tf.device("/gpu:0"):
            tb.relu_layer(20)
            .linear_layer(5)
        }
    ,
        { tf.device("/gpu:1"):
            tb.sigmoid_layer(20)
            .linear_layer(5)
        }
    ,
        { tf.device("/cpu:0"):
            tb.tanh_layer(20)
            .linear_layer(5)
        }
    ],
    tb.reduce(tf.add)
    .softmax()
    .tensor()
)
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tile(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tile, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tile

Return

Applicative

Original documentation for Builder.tile

def tile(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.tile to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.tile

def tile(input, multiples, name=None)

Constructs a tensor by tiling a given tensor.

This operation creates a new tensor by replicating input multiples times. The output tensor's i'th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the 'i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].

Args: input: A Tensor. 1-D or higher. multiples: A Tensor. Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
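
For reference, a plain TensorFlow sketch of the tiling example from the docstring above:

import tensorflow as tf

t = tf.constant([1, 2, 3, 4])

# Tiling [a b c d] by [2] produces [a b c d a b c d]
tiled = tf.tile(t, [2])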

def tile_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tile_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tile_layer

Return

Applicative

Original documentation for Builder.tile_layer

def tile_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.tile, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.tile

def tile(input, multiples, name=None):

Constructs a tensor by tiling a given tensor.

This operation creates a new tensor by replicating input multiples times. The output tensor's i'th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the 'i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].

Args: input: A Tensor. 1-D or higher. multiples: A Tensor. Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_bfloat16(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_bfloat16, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_bfloat16

Return

Applicative

Original documentation for Builder.to_bfloat16

def to_bfloat16(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.to_bfloat16 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.to_bfloat16

def to_bfloat16(x, name="ToBFloat16")

Casts a tensor to type bfloat16.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type bfloat16.

Raises: TypeError: If x cannot be cast to the bfloat16.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_bfloat16_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_bfloat16_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_bfloat16_layer

Return

Applicative

Original documentation for Builder.to_bfloat16_layer

def to_bfloat16_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.to_bfloat16, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.to_bfloat16

def to_bfloat16(x, name="ToBFloat16"):

Casts a tensor to type bfloat16.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type bfloat16.

Raises: TypeError: If x cannot be cast to the bfloat16.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_double(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_double, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_double

Return

Applicative

Original documentation for Builder.to_double

def to_double(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.to_double to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.to_double

def to_double(x, name="ToDouble")

Casts a tensor to type float64.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type float64.

Raises: TypeError: If x cannot be cast to the float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_double_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_double_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_double_layer

Return

Applicative

Original documentation for Builder.to_double_layer

def to_double_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.to_double, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.to_double

def to_double(x, name="ToDouble"):

Casts a tensor to type float64.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type float64.

Raises: TypeError: If x cannot be cast to the float64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_float(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_float, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_float

Return

Applicative

Original documentation for Builder.to_float

def to_float(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.to_float to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.to_float

def to_float(x, name="ToFloat")

Casts a tensor to type float32.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type float32.

Raises: TypeError: If x cannot be cast to the float32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
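
A minimal sketch of casting through the lifted tf.to_float (assuming the tb.pipe DSL used elsewhere in these docs):

import tensorflow as tf
from tensorbuilder import tb

x = tf.placeholder(tf.int32, shape=[None, 4])

h = tb.pipe(
    x,
    tb.to_float(),   # cast the builder's tensor to float32
    tb.tensor()
)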

def to_float_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_float_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_float_layer

Return

Applicative

Original documentation for Builder.to_float_layer

def to_float_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.to_float, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.to_float

def to_float(x, name="ToFloat"):

Casts a tensor to type float32.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type float32.

Raises: TypeError: If x cannot be cast to the float32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_int32(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_int32, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_int32

Return

Applicative

Original documentation for Builder.to_int32

def to_int32(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.to_int32 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.to_int32

def to_int32(x, name="ToInt32")

Casts a tensor to type int32.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type int32.

Raises: TypeError: If x cannot be cast to the int32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_int32_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_int32_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_int32_layer

Return

Applicative

Original documentation for Builder.to_int32_layer

def to_int32_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.to_int32, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.to_int32

def to_int32(x, name="ToInt32"):

Casts a tensor to type int32.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type int32.

Raises: TypeError: If x cannot be cast to the int32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_int64(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_int64, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_int64

Return

Applicative

Original documentation for Builder.to_int64

def to_int64(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.to_int64 to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.to_int64

def to_int64(x, name="ToInt64")

Casts a tensor to type int64.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type int64.

Raises: TypeError: If x cannot be cast to the int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def to_int64_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.to_int64_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.to_int64_layer

Return

Applicative

Original documentation for Builder.to_int64_layer

def to_int64_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.to_int64, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.to_int64

def to_int64(x, name="ToInt64"):

Casts a tensor to type int64.

Args: x: A Tensor or SparseTensor. name: A name for the operation (optional).

Returns: A Tensor or SparseTensor with same shape as x with type int64.

Raises: TypeError: If x cannot be cast to the int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def top_k(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.top_k, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.top_k

Return

Applicative

Original documentation for Builder.top_k

def top_k(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.top_k to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.top_k

def top_k(input, k=1, sorted=True, name=None)

Finds values and indices of the k largest entries for the last dimension.

If the input is a vector (rank-1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

For matrices (resp. higher rank input), computes the top k entries in each row (resp. vector along the last dimension). Thus,

values.shape = indices.shape = input.shape[:-1] + [k]

If two elements are equal, the lower-index element appears first.

Args:
  • input: 1-D or higher Tensor with last dimension at least k.
  • k: 0-D int32 Tensor. Number of top elements to look for along the last dimension (along each row for matrices).
  • sorted: If true the resulting k elements will be sorted by the values in descending order.
  • name: Optional name for the operation.

Returns:
  • values: The k largest elements along each last dimensional slice.
  • indices: The indices of values within the last dimension of input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)
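
For reference, a plain TensorFlow sketch of top_k on a small matrix (hypothetical values):

import tensorflow as tf

scores = tf.constant([[0.1, 0.9, 0.4],
                      [0.7, 0.2, 0.5]])

# Per row: the 2 largest values and their indices
values, indices = tf.nn.top_k(scores, k=2)
# values  ==> [[0.9, 0.4], [0.7, 0.5]]
# indices ==> [[1, 2],     [0, 2]]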

def top_k_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.top_k_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.top_k_layer

Return

Applicative

Original documentation for Builder.top_k_layer

def top_k_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.top_k, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.top_k

def top_k(input, k=1, sorted=True, name=None):

Finds values and indices of the k largest entries for the last dimension.

If the input is a vector (rank-1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

For matrices (resp. higher rank input), computes the top k entries in each row (resp. vector along the last dimension). Thus,

values.shape = indices.shape = input.shape[:-1] + [k]

If two elements are equal, the lower-index element appears first.

Args: input: 1-D or higher Tensor with last dimension at least k. k: 0-D int32 Tensor. Number of top elements to look for along the last dimension (along each row for matrices). sorted: If true the resulting k elements will be sorted by the values in descending order. name: Optional name for the operation.

Returns: values: The k largest elements along each last dimensional slice. indices: The indices of values within the last dimension of input.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def trace(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.trace, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.trace

Return

Applicative

Original documentation for Builder.trace

def trace(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.trace, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.trace

def trace(x, name=None)

Compute the trace of a tensor x.

trace(x) returns the sum of the elements along the diagonal.

For example:

```python
# 'x' is [[1, 1],
#         [1, 1]]
tf.trace(x) ==> 2

# 'x' is [[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]]
tf.trace(x) ==> 15
```

Args: x: 2-D tensor. name: A name for the operation (optional).

Returns: The trace of the input tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def trace_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.trace_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.trace_layer

Return

Applicative

Original documentation for Builder.trace_layer

def trace_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.trace, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.trace

def trace(x, name=None):

Compute the trace of a tensor x.

trace(x) returns the sum of the elements along the diagonal.

For example:

```python
# 'x' is [[1, 1],
#         [1, 1]]
tf.trace(x) ==> 2

# 'x' is [[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]]
tf.trace(x) ==> 15
```

Args: x: 2-D tensor. name: A name for the operation (optional).

Returns: The trace of the input tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def trainable_variables(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.trainable_variables, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.trainable_variables

Return

Applicative

Original documentation for Builder.trainable_variables

def trainable_variables(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.trainable_variables, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.trainable_variables

def trainable_variables()

Returns all variables created with trainable=True.

When passed trainable=True, the Variable() constructor automatically adds new variables to the graph collection GraphKeys.TRAINABLE_VARIABLES. This convenience function returns the contents of that collection.

Returns: A list of Variable objects.

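A minimal sketch, assuming the session-era variable API:

```python
import tensorflow as tf

w = tf.Variable(tf.zeros([2, 2]), name="w")                # trainable by default
b = tf.Variable(tf.zeros([2]), trainable=False, name="b")  # excluded

print([v.name for v in tf.trainable_variables()])  # => [u'w:0']
```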
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def trainable_variables_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.trainable_variables_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.trainable_variables_layer

Return

Applicative

Original documentation for Builder.trainable_variables_layer

def trainable_variables_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.trainable_variables, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.trainable_variables

def trainable_variables():

Returns all variables created with trainable=True.

When passed trainable=True, the Variable() constructor automatically adds new variables to the graph collection GraphKeys.TRAINABLE_VARIABLES. This convenience function returns the contents of that collection.

Returns: A list of Variable objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def transpose(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.transpose, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.transpose

Return

Applicative

Original documentation for Builder.transpose

def transpose(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.transpose, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.transpose

def transpose(a, perm=None, name="transpose")

Transposes a. Permutes the dimensions according to perm.

The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

For example:

```python
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.transpose(x) ==> [[1 4]
                     [2 5]
                     [3 6]]

# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4]
                                  [2 5]
                                  [3 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is [[[1  2  3]
#          [4  5  6]]
#         [[7  8  9]
#          [10 11 12]]]

# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1  4]
                                      [2  5]
                                      [3  6]]
                                     [[7 10]
                                      [8 11]
                                      [9 12]]]
```

Args: a: A Tensor. perm: A permutation of the dimensions of a. name: A name for the operation (optional).

Returns: A transposed Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def transpose_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.transpose_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.transpose_layer

Return

Applicative

Original documentation for Builder.transpose_layer

def transpose_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.transpose, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.transpose

def transpose(a, perm=None, name="transpose"):

Transposes a. Permutes the dimensions according to perm.

The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

For example:

```python
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.transpose(x) ==> [[1 4]
                     [2 5]
                     [3 6]]

# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4]
                                  [2 5]
                                  [3 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is [[[1  2  3]
#          [4  5  6]]
#         [[7  8  9]
#          [10 11 12]]]

# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1  4]
                                      [2  5]
                                      [3  6]]
                                     [[7 10]
                                      [8 11]
                                      [9 12]]]
```

Args: a: A Tensor. perm: A permutation of the dimensions of a. name: A name for the operation (optional).

Returns: A transposed Tensor.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def truediv(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.truediv, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.truediv

Return

Applicative

Original documentation for Builder.truediv

def truediv(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.truediv, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.truediv

def truediv(x, y, name=None)

Divides x / y elementwise, always producing floating point results.

The same as tf.div for floating point arguments, but casts integer arguments to floating point before dividing so that the result is always floating point. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.floordiv.

x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).

Args: x: Tensor numerator of numeric type. y: Tensor denominator of numeric type. name: A name for the operation (optional).

Returns: x / y evaluated in floating point.

Raises: TypeError: If x and y have different dtypes.

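A minimal sketch contrasting integer division with truediv, assuming the session-style API:

```python
import tensorflow as tf

x = tf.constant([3, 4])  # int32
y = tf.constant([2, 2])  # int32

with tf.Session() as sess:
    print(sess.run(tf.div(x, y)))      # => [1 2]    (integer division)
    print(sess.run(tf.truediv(x, y)))  # => [1.5 2.] (int32 inputs cast to float64)
```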
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def truediv_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.truediv_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.truediv_layer

Return

Applicative

Original documentation for Builder.truediv_layer

def truediv_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.truediv, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.truediv

def truediv(x, y, name=None):

Divides x / y elementwise, always producing floating point results.

The same as tf.div for floating point arguments, but casts integer arguments to floating point before dividing so that the result is always floating point. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.floordiv.

x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).

Args: x: Tensor numerator of numeric type. y: Tensor denominator of numeric type. name: A name for the operation (optional).

Returns: x / y evaluated in floating point.

Raises: TypeError: If x and y have different dtypes.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def truncated_normal(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.truncated_normal, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.truncated_normal

Return

Applicative

Original documentation for Builder.truncated_normal

def truncated_normal(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.truncated_normal, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.truncated_normal

def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=<dtype: 'float32'>, seed=None, name=None)

Outputs random values from a truncated normal distribution.

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

Args: shape: A 1-D integer Tensor or Python array. The shape of the output tensor. mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution. dtype: The type of the output. seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior. name: A name for the operation (optional).

Returns: A tensor of the specified shape filled with random truncated normal values.

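A minimal sketch, assuming the session-style API:

```python
import tensorflow as tf

# every sample lies within 2 standard deviations of the mean
samples = tf.truncated_normal([3, 2], mean=0.0, stddev=1.0, seed=42)

with tf.Session() as sess:
    print(sess.run(samples))  # a 3x2 draw from the truncated normal
```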
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def truncated_normal_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.truncated_normal_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.truncated_normal_initializer

Return

Applicative

Original documentation for Builder.truncated_normal_initializer

def truncated_normal_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.truncated_normal_initializer, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.truncated_normal_initializer

def truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=<dtype: 'float32'>)

Returns an initializer that generates a truncated normal distribution.

These values are similar to values from a random_normal_initializer except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.

Args: mean: a python scalar or a scalar tensor. Mean of the random values to generate. stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed: A Python integer. Used to create random seeds. See set_random_seed for behavior. dtype: The data type. Only floating point types are supported.

Returns: An initializer that generates tensors with a truncated normal distribution.

Raises: ValueError: if dtype is not a floating point type.

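A minimal sketch of using the initializer with get_variable; tf.initialize_all_variables is the era-appropriate initialization op:

```python
import tensorflow as tf

init = tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=42)
w = tf.get_variable("w", shape=[784, 100], initializer=init)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(w).std())  # on the order of 0.1
```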
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def truncated_normal_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.truncated_normal_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.truncated_normal_initializer_layer

Return

Applicative

Original documentation for Builder.truncated_normal_initializer_layer

def truncated_normal_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.truncated_normal_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.truncated_normal_initializer

def truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=<dtype: 'float32'>):

Returns an initializer that generates a truncated normal distribution.

These values are similar to values from a random_normal_initializer except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.

Args: mean: a python scalar or a scalar tensor. Mean of the random values to generate. stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed: A Python integer. Used to create random seeds. See set_random_seed for behavior. dtype: The data type. Only floating point types are supported.

Returns: An initializer that generates tensors with a truncated normal distribution.

Raises: ValueError: if dtype is not a floating point type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def truncated_normal_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.truncated_normal_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.truncated_normal_layer

Return

Applicative

Original documentation for Builder.truncated_normal_layer

def truncated_normal_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.truncated_normal, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.truncated_normal

def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=<dtype: 'float32'>, seed=None, name=None):

Outputs random values from a truncated normal distribution.

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

Args: shape: A 1-D integer Tensor or Python array. The shape of the output tensor. mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution. dtype: The type of the output. seed: A Python integer. Used to create a random seed for the distribution. See set_random_seed for behavior. name: A name for the operation (optional).

Returns: A tensor of the specified shape filled with random truncated normal values.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tuple(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tuple, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tuple

Return

Applicative

Original documentation for Builder.tuple

def tuple(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.tuple, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.tuple

def tuple(tensors, name=None, control_inputs=None)

Group tensors together.

This creates a tuple of tensors with the same values as the tensors argument, except that the value of each tensor is only returned after the values of all tensors have been computed.

control_inputs contains additional ops that have to finish before this op finishes, but whose outputs are not returned.

This can be used as a "join" mechanism for parallel computations: all the argument tensors can be computed in parallel, but the values of any tensor returned by tuple are only available after all the parallel computations are done.

See also group and with_dependencies.

Args: tensors: A list of Tensors or IndexedSlices, some entries can be None. name: (optional) A name to use as a name_scope for the operation. control_inputs: List of additional ops to finish before returning.

Returns: Same as tensors.

Raises: ValueError: If tensors does not contain any Tensor or IndexedSlices. TypeError: If control_inputs is not a list of Operation or Tensor objects.

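A minimal join sketch; tf.Print stands in here for any op whose completion must be awaited:

```python
import tensorflow as tf

a = tf.constant(1.0)
b = tf.constant(2.0)
side_effect = tf.Print(a, [a], "computed a: ")

# a2 and b2 carry the same values as a and b, but are only
# available once side_effect has also finished running.
a2, b2 = tf.tuple([a, b], control_inputs=[side_effect])
```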
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def tuple_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.tuple_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.tuple_layer

Return

Applicative

Original documentation for Builder.tuple_layer

def tuple_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.tuple, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.tuple

def tuple(tensors, name=None, control_inputs=None):

Group tensors together.

This creates a tuple of tensors with the same values as the tensors argument, except that the value of each tensor is only returned after the values of all tensors have been computed.

control_inputs contains additional ops that have to finish before this op finishes, but whose outputs are not returned.

This can be used as a "join" mechanism for parallel computations: all the argument tensors can be computed in parallel, but the values of any tensor returned by tuple are only available after all the parallel computations are done.

See also group and with_dependencies.

Args: tensors: A list of Tensors or IndexedSlices, some entries can be None. name: (optional) A name to use as a name_scope for the operation. control_inputs: List of additional ops to finish before returning.

Returns: Same as tensors.

Raises: ValueError: If tensors does not contain any Tensor or IndexedSlices. TypeError: If control_inputs is not a list of Operation or Tensor objects.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def uniform_candidate_sampler(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.uniform_candidate_sampler, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.uniform_candidate_sampler

Return

Applicative

Original documentation for Builder.uniform_candidate_sampler

def uniform_candidate_sampler(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.uniform_candidate_sampler, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.uniform_candidate_sampler

def uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)

Samples a set of classes using a uniform base distribution.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution for this operation is the uniform distribution over the range of integers [0, range_max).

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.

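A minimal sketch, assuming the session-style API:

```python
import tensorflow as tf

true_classes = tf.constant([[0], [3]], dtype=tf.int64)  # [batch_size, num_true]
sampled, true_exp, sampled_exp = tf.nn.uniform_candidate_sampler(
    true_classes, num_true=1, num_sampled=4, unique=True, range_max=10)

with tf.Session() as sess:
    print(sess.run(sampled))  # 4 distinct class ids drawn uniformly from [0, 10)
```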
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def uniform_candidate_sampler_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.uniform_candidate_sampler_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.uniform_candidate_sampler_layer

Return

Applicative

Original documentation for Builder.uniform_candidate_sampler_layer

def uniform_candidate_sampler_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.uniform_candidate_sampler, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.uniform_candidate_sampler

def uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None):

Samples a set of classes using a uniform base distribution.

This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).

The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.

The base distribution for this operation is the uniform distribution over the range of integers [0, range_max).

In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.

Args: true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true: An int. The number of target classes per training example. num_sampled: An int. The number of classes to randomly sample per batch. unique: A bool. Determines whether all sampled classes in a batch are unique. range_max: An int. The number of possible classes. seed: An int. An operation-specific seed. Default is 0. name: A name for the operation (optional).

Returns: sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def uniform_unit_scaling_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.uniform_unit_scaling_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.uniform_unit_scaling_initializer

Return

Applicative

Original documentation for Builder.uniform_unit_scaling_initializer

def uniform_unit_scaling_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.uniform_unit_scaling_initializer, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.uniform_unit_scaling_initializer

def uniform_unit_scaling_initializer(factor=1.0, seed=None, dtype=<dtype: 'float32'>)

Returns an initializer that generates tensors without scaling variance.

When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. If the input is x and the operation x * W, and we want to initialize W uniformly at random, we need to pick W from

[-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]

to keep the scale intact, where dim = W.shape[0] (the size of the input). A similar calculation for convolutional networks gives an analogous result with dim equal to the product of the first 3 dimensions. When nonlinearities are present, we need to multiply this by a constant factor. See Sussillo et al., 2014 (pdf) for deeper motivation, experiments and the calculation of constants. In section 2.3 there, the constants were numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.

Args: factor: Float. A multiplicative factor by which the values will be scaled. seed: A Python integer. Used to create random seeds. See set_random_seed for behavior. dtype: The data type. Only floating point types are supported.

Returns: An initializer that generates tensors with unit variance.

Raises: ValueError: if dtype is not a floating point type.

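A minimal sketch; factor=1.43 is the relu constant quoted above:

```python
import tensorflow as tf

init = tf.uniform_unit_scaling_initializer(factor=1.43)
w = tf.get_variable("w_relu", shape=[256, 128], initializer=init)
```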
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def uniform_unit_scaling_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.uniform_unit_scaling_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.uniform_unit_scaling_initializer_layer

Return

Applicative

Original documentation for Builder.uniform_unit_scaling_initializer_layer

def uniform_unit_scaling_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.uniform_unit_scaling_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.uniform_unit_scaling_initializer

def uniform_unit_scaling_initializer(factor=1.0, seed=None, dtype=<dtype: 'float32'>):

Returns an initializer that generates tensors without scaling variance.

When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. If the input is x and the operation x * W, and we want to initialize W uniformly at random, we need to pick W from

[-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]

to keep the scale intact, where dim = W.shape[0] (the size of the input). A similar calculation for convolutional networks gives an analogous result with dim equal to the product of the first 3 dimensions. When nonlinearities are present, we need to multiply this by a constant factor. See Sussillo et al., 2014 (pdf) for deeper motivation, experiments and the calculation of constants. In section 2.3 there, the constants were numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.

Args: factor: Float. A multiplicative factor by which the values will be scaled. seed: A Python integer. Used to create random seeds. See set_random_seed for behavior. dtype: The data type. Only floating point types are supported.

Returns: An initializer that generates tensors with unit variance.

Raises: ValueError: if dtype is not a floating point type.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unique(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unique, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unique

Return

Applicative

Original documentation for Builder.unique

def unique(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.unique, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.unique

def unique(x, out_idx=None, name=None)

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words:

y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]

For example:

```python
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y   ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
```

Args: x: A Tensor. 1-D. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (y, idx). y: A Tensor. Has the same type as x. 1-D. idx: A Tensor of type out_idx. 1-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unique_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unique_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unique_layer

Return

Applicative

Original documentation for Builder.unique_layer

def unique_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.unique, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.unique

def unique(x, out_idx=None, name=None):

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words:

y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]

For example:

```python
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y   ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
```

Args: x: A Tensor. 1-D. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (y, idx). y: A Tensor. Has the same type as x. 1-D. idx: A Tensor of type out_idx. 1-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unique_with_counts(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unique_with_counts, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unique_with_counts

Return

Applicative

Original documentation for Builder.unique_with_counts

def unique_with_counts(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.unique_with_counts, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.unique_with_counts

def unique_with_counts(x, out_idx=None, name=None)

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. Finally, it returns a third tensor count that contains the count of each element of y in x. In other words:

y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]

For example:

```python
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y     ==> [1, 2, 4, 7, 8]
idx   ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
```

Args: x: A Tensor. 1-D. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (y, idx, count). y: A Tensor. Has the same type as x. 1-D. idx: A Tensor of type out_idx. 1-D. count: A Tensor of type out_idx. 1-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unique_with_counts_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unique_with_counts_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unique_with_counts_layer

Return

Applicative

Original documentation for Builder.unique_with_counts_layer

def unique_with_counts_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.unique_with_counts, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.unique_with_counts

def unique_with_counts(x, out_idx=None, name=None):

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. Finally, it returns a third tensor count that contains the count of each element of y in x. In other words:

y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]

For example:

```python
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y     ==> [1, 2, 4, 7, 8]
idx   ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
```

Args: x: A Tensor. 1-D. out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name: A name for the operation (optional).

Returns: A tuple of Tensor objects (y, idx, count). y: A Tensor. Has the same type as x. 1-D. idx: A Tensor of type out_idx. 1-D. count: A Tensor of type out_idx. 1-D.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unpack(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unpack, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unpack

Return

Applicative

Original documentation for Builder.unpack

def unpack(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.unpack, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.unpack

def unpack(value, num=None, axis=0, name="unpack")

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

Unpacks num tensors from value by chipping it along the axis dimension. If num is not specified (the default), it is inferred from value's shape. If value.shape[axis] is not known, ValueError is raised.

For example, given a tensor of shape (A, B, C, D);

If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split).

If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc.

This is the opposite of pack. The numpy equivalent is

tf.unpack(x, n) = list(x)

Args: value: A rank R > 0 Tensor to be unpacked. num: An int. The length of the dimension axis. Automatically inferred if None (the default). axis: An int. The axis to unpack along. Defaults to the first dimension. Supports negative indexes. name: A name for the operation (optional).

Returns: The list of Tensor objects unpacked from value.

Raises: ValueError: If num is unspecified and cannot be inferred. ValueError: If axis is out of the range [-R, R).

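A minimal sketch, assuming the session-style API:

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
rows = tf.unpack(x)          # 2 tensors of shape (3,)
cols = tf.unpack(x, axis=1)  # 3 tensors of shape (2,)

with tf.Session() as sess:
    print(sess.run(rows[0]))  # => [1 2 3]
    print(sess.run(cols[0]))  # => [1 4]
```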
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unpack_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unpack_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unpack_layer

Return

Applicative

Original documentation for Builder.unpack_layer

def unpack_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.unpack, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.unpack

def unpack(value, num=None, axis=0, name="unpack"):

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

Unpacks num tensors from value by chipping it along the axis dimension. If num is not specified (the default), it is inferred from value's shape. If value.shape[axis] is not known, ValueError is raised.

For example, given a tensor of shape (A, B, C, D);

If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split).

If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc.

This is the opposite of pack. The numpy equivalent is

tf.unpack(x, n) = list(x)

Args: value: A rank R > 0 Tensor to be unpacked. num: An int. The length of the dimension axis. Automatically inferred if None (the default). axis: An int. The axis to unpack along. Defaults to the first dimension. Supports negative indexes. name: A name for the operation (optional).

Returns: The list of Tensor objects unpacked from value.

Raises: ValueError: If num is unspecified and cannot be inferred. ValueError: If axis is out of the range [-R, R).

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unsorted_segment_sum(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unsorted_segment_sum, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unsorted_segment_sum

Return

Applicative

Original documentation for Builder.unsorted_segment_sum

def unsorted_segment_sum(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.unsorted_segment_sum, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.unsorted_segment_sum

def unsorted_segment_sum(data, segment_ids, num_segments, name=None)

Computes the sum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that output[i] = sum_{j...} data[j...], where the sum is over all tuples j... such that segment_ids[j...] == i. Unlike SegmentSum, segment_ids need not be sorted and need not cover all values in the full range of valid values.

If the sum is empty for a given segment ID i, output[i] = 0.

num_segments should equal the number of distinct segment IDs.

Args: data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. segment_ids: A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape. num_segments: A Tensor of type int32. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for the first segment_ids.rank dimensions, which are replaced with a single dimension which has size num_segments.

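A minimal sketch; note the unsorted segment ids and the zeroed empty segment:

```python
import tensorflow as tf

data = tf.constant([1.0, 2.0, 3.0, 4.0])
segment_ids = tf.constant([0, 2, 0, 2])  # need not be sorted

out = tf.unsorted_segment_sum(data, segment_ids, num_segments=3)

with tf.Session() as sess:
    print(sess.run(out))  # => [4. 0. 6.]  (segment 1 is empty, hence 0)
```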
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def unsorted_segment_sum_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.unsorted_segment_sum_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.unsorted_segment_sum_layer

Return

Applicative

Original documentation for Builder.unsorted_segment_sum_layer

def unsorted_segment_sum_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.unsorted_segment_sum, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.unsorted_segment_sum

def unsorted_segment_sum(data, segment_ids, num_segments, name=None):

Computes the sum along segments of a tensor.

Read the section on Segmentation for an explanation of segments.

Computes a tensor such that output[i] = sum_{j...} data[j...], where the sum is over all tuples j... such that segment_ids[j...] == i. Unlike SegmentSum, segment_ids need not be sorted and need not cover all values in the full range of valid values.

If the sum is empty for a given segment ID i, output[i] = 0.

num_segments should equal the number of distinct segment IDs.

Args: data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. segment_ids: A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape. num_segments: A Tensor of type int32. name: A name for the operation (optional).

Returns: A Tensor. Has the same type as data. Has same shape as data, except for the first segment_ids.rank dimensions, which are replaced with a single dimension which has size num_segments.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def variable_axis_size_partitioner(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.variable_axis_size_partitioner, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.variable_axis_size_partitioner

Return

Applicative

Original documentation for Builder.variable_axis_size_partitioner

def variable_axis_size_partitioner(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.variable_axis_size_partitioner, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.variable_axis_size_partitioner

def variable_axis_size_partitioner(max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None)

Get a partitioner for VariableScope to keep shards below max_shard_bytes.

This partitioner will shard a Variable along one axis, attempting to keep the maximum shard size below max_shard_bytes. In practice, this is not always possible when sharding along only one axis. When this happens, this axis is sharded as much as possible (i.e., every dimension becomes a separate shard).

If the partitioner hits the max_shards limit, then each shard may end up larger than max_shard_bytes. By default max_shards equals None and no limit on the number of shards is enforced.

One reasonable value for max_shard_bytes is (64 << 20) - 1, or almost 64MB, to keep below the protobuf byte limit.

Args: max_shard_bytes: The maximum size any given shard is allowed to be. axis: The axis to partition along. Default: outermost axis. bytes_per_string_element: If the Variable is of type string, this provides an estimate of how large each scalar in the Variable is. max_shards: The maximum number of shards (an int); takes precedence over max_shard_bytes.

Returns: A partition function usable as the partitioner argument to variable_scope, get_variable, and get_partitioned_variable_list.

Raises: ValueError: If any of the byte counts are non-positive.

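A minimal sketch using the ~64MB bound suggested above; the variable_scope/get_variable usage follows the Returns note:

```python
import tensorflow as tf

# keep every shard of a large embedding matrix below the protobuf limit
partitioner = tf.variable_axis_size_partitioner((64 << 20) - 1)

with tf.variable_scope("embeddings", partitioner=partitioner):
    emb = tf.get_variable("emb", shape=[1000000, 128], dtype=tf.float32)
```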
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def variable_axis_size_partitioner_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.variable_axis_size_partitioner_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.variable_axis_size_partitioner_layer

Return

Applicative

Original documentation for Builder.variable_axis_size_partitioner_layer

def variable_axis_size_partitioner_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.variable_axis_size_partitioner, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.variable_axis_size_partitioner

def variable_axis_size_partitioner(max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None):

Get a partitioner for VariableScope to keep shards below max_shard_bytes.

This partitioner will shard a Variable along one axis, attempting to keep the maximum shard size below max_shard_bytes. In practice, this is not always possible when sharding along only one axis. When this happens, this axis is sharded as much as possible (i.e., every dimension becomes a separate shard).

If the partitioner hits the max_shards limit, then each shard may end up larger than max_shard_bytes. By default max_shards equals None and no limit on the number of shards is enforced.

One reasonable value for max_shard_bytes is (64 << 20) - 1, or almost 64MB, to keep below the protobuf byte limit.

Args: max_shard_bytes: The maximum size any given shard is allowed to be. axis: The axis to partition along. Default: outermost axis. bytes_per_string_element: If the Variable is of type string, this provides an estimate of how large each scalar in the Variable is. max_shards: The maximum number of shards (an int); takes precedence over max_shard_bytes.

Returns: A partition function usable as the partitioner argument to variable_scope, get_variable, and get_partitioned_variable_list.

Raises: ValueError: If any of the byte counts are non-positive.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def variable_op_scope(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.variable_op_scope, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.variable_op_scope

Return

Applicative

Original documentation for Builder.variable_op_scope

def variable_op_scope(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.variable_op_scope, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.variable_op_scope

def variable_op_scope()

Deprecated: context manager for defining an op that creates variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def variable_op_scope_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.variable_op_scope_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.variable_op_scope_layer

Return

Applicative

Original documentation for Builder.variable_op_scope_layer

def variable_op_scope_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.variable_op_scope, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.variable_op_scope

def variable_op_scope():

Deprecated: context manager for defining an op that creates variables.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def verify_tensor_all_finite(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.verify_tensor_all_finite, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.verify_tensor_all_finite

Return

Applicative

Original documentation for Builder.verify_tensor_all_finite

def verify_tensor_all_finite(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.verify_tensor_all_finite, adapted to work with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.verify_tensor_all_finite

def verify_tensor_all_finite(t, msg, name=None)

Assert that the tensor does not contain any NaN's or Inf's.

Args: t: Tensor to check. msg: Message to log on failure. name: A name for this operation (optional).

Returns: Same tensor as t.

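A minimal sketch, assuming the session-style API; the run fails because of the NaN:

```python
import tensorflow as tf

logits = tf.constant([0.5, float("nan")])
checked = tf.verify_tensor_all_finite(logits, msg="logits contain NaN/Inf")

with tf.Session() as sess:
    sess.run(checked)  # raises at run time with the message above
```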
def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def verify_tensor_all_finite_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.verify_tensor_all_finite_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.verify_tensor_all_finite_layer

Return

Applicative

Original documentation for Builder.verify_tensor_all_finite_layer

def verify_tensor_all_finite_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.verify_tensor_all_finite, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.verify_tensor_all_finite

def verify_tensor_all_finite(t, msg, name=None):

Assert that the tensor does not contain any NaN's or Inf's.

Args: t: Tensor to check. msg: Message to log on failure. name: A name for this operation (optional).

Returns: Same tensor as t.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def weighted_cross_entropy_with_logits(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.weighted_cross_entropy_with_logits, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.weighted_cross_entropy_with_logits

Return

Applicative

Original documentation for Builder.weighted_cross_entropy_with_logits

def weighted_cross_entropy_with_logits(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.weighted_cross_entropy_with_logits that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.weighted_cross_entropy_with_logits

def weighted_cross_entropy_with_logits(logits, targets, pos_weight, name=None)

Computes a weighted cross entropy.

This is like sigmoid_cross_entropy_with_logits() except that pos_weight allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.

The usual cross-entropy cost is defined as:

targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))

The argument pos_weight is used as a multiplier for the positive targets:

targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))

For brevity, let x = logits, z = targets, q = pos_weight. The loss is:

  qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz +  1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))

Setting l = (1 + (q - 1) * z), to ensure stability and avoid overflow, the implementation uses

(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))

logits and targets must have the same type and shape.

Args:

  • logits: A Tensor of type float32 or float64.
  • targets: A Tensor of the same type and shape as logits.
  • pos_weight: A coefficient to use on the positive examples.
  • name: A name for the operation (optional).

Returns: A Tensor of the same shape as logits with the componentwise weighted logistic losses.

Raises: ValueError: If logits and targets do not have the same shape.
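
For instance, up-weighting positives is a one-argument change (a sketch against the TF 0.x-era signature shown above):

```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 1])
targets = tf.placeholder(tf.float32, [None, 1])

# pos_weight > 1 makes a missed positive costlier than a false alarm,
# trading precision for recall; pos_weight < 1 does the reverse.
loss = tf.nn.weighted_cross_entropy_with_logits(logits, targets, pos_weight=3.0)
```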

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def weighted_cross_entropy_with_logits_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.weighted_cross_entropy_with_logits_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.weighted_cross_entropy_with_logits_layer

Return

Applicative

Original documentation for Builder.weighted_cross_entropy_with_logits_layer

def weighted_cross_entropy_with_logits_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.weighted_cross_entropy_with_logits, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.weighted_cross_entropy_with_logits

def weighted_cross_entropy_with_logits(logits, targets, pos_weight, name=None):

Computes a weighted cross entropy.

This is like sigmoid_cross_entropy_with_logits() except that pos_weight allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.

The usual cross-entropy cost is defined as:

targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))

The argument pos_weight is used as a multiplier for the positive targets:

targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))

For brevity, let x = logits, z = targets, q = pos_weight. The loss is:

  qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz +  1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))

Setting l = (1 + (q - 1) * z), to ensure stability and avoid overflow, the implementation uses

(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))

logits and targets must have the same type and shape.

Args:

  • logits: A Tensor of type float32 or float64.
  • targets: A Tensor of the same type and shape as logits.
  • pos_weight: A coefficient to use on the positive examples.
  • name: A name for the operation (optional).

Returns: A Tensor of the same shape as logits with the componentwise weighted logistic losses.

Raises: ValueError: If logits and targets do not have the same shape.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def where(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.where, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.where

Return

Applicative

Original documentation for Builder.where

def where(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.where that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.where

def where(input, name=None)

Returns locations of true values in a boolean tensor.

This operation returns the coordinates of true elements in input. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in input. Indices are output in row-major order.

For example:

```prettyprint
# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0], [1, 0]]

# 'input' tensor is [[[True, False]
#                     [True, False]]
#                    [[False, True]
#                     [False, True]]
#                    [[False, False]
#                     [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 1], [2, 1, 1]]
```

Args:

  • input: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.
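
A runnable version of the doc's first example (TF 0.x-era session API assumed):

```python
import tensorflow as tf

mask = tf.constant([[True, False], [True, False]])
indices = tf.where(mask)  # int64 coordinates of the True entries

with tf.Session() as sess:
    print(sess.run(indices))  # [[0 0]
                              #  [1 0]]
```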

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def where_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.where_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.where_layer

Return

Applicative

Original documentation for Builder.where_layer

def where_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.where, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.where

def where(input, name=None):

Returns locations of true values in a boolean tensor.

This operation returns the coordinates of true elements in input. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in input. Indices are output in row-major order.

For example:

```prettyprint
# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0], [1, 0]]

# 'input' tensor is [[[True, False]
#                     [True, False]]
#                    [[False, True]
#                     [False, True]]
#                    [[False, False]
#                     [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 1], [2, 1, 1]]
```

Args:

  • input: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns: A Tensor of type int64.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def while_loop(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.while_loop, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.while_loop

Return

Applicative

Original documentation for Builder.while_loop

def while_loop(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.while_loop that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.while_loop

def while_loop(cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)

Repeat body while the condition cond is true.

cond is a callable returning a boolean scalar tensor. body is a callable returning a (possibly nested) tuple or list of tensors of the same arity (length and structure) and types as loop_vars. loop_vars is a (possibly nested) tuple or list of tensors that is passed to both cond and body. cond and body both take as many arguments as there are loop_vars.

While cond evaluates to true, body is executed.

In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations.

For correctness, tf.while_loop() strictly enforces shape invariants for the loop variables. A shape invariant is a (possibly partial) shape that is unchanged across the iterations of the loop. An error will be raised if the shape of a loop variable after an iteration is determined to be more general than or incompatible with its shape invariant. For example, a shape of [11, None] is more general than a shape of [11, 17], and [11, 21] is not compatible with [11, 17]. By default (if the argument shape_invariants is not specified), it is assumed that the initial shape of each tensor in loop_vars is the same in every iteration. The shape_invariants argument allows the caller to specify a less specific shape invariant for each loop variable, which is needed if the shape varies between iterations. The Tensor.set_shape() function may also be used in the body function to indicate that the output loop variable has a particular shape. The shape invariant for SparseTensor and IndexedSlices are treated specially as follows:

a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.shape property. It must be the shape of a vector.

b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]).

while_loop implements non-strict semantics, enabling multiple iterations to run in parallel. The maximum number of parallel iterations can be controlled by parallel_iterations, which gives users some control over memory consumption and execution order. For correct programs, while_loop should return the same result for any parallel_iterations > 0.

For training, TensorFlow remembers the tensors that are produced in the forward inference but needed in back propagation. These tensors can be a main source of memory consumption and often cause OOM problems when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. This for example allows us to train RNN models with very long sequences and large batches.

Args:

  • cond: A callable that represents the termination condition of the loop.
  • body: A callable that represents the loop body.
  • loop_vars: A (possibly nested) tuple or list of numpy array, Tensor, and TensorArray objects.
  • shape_invariants: The shape invariants for the loop variables.
  • parallel_iterations: The number of iterations allowed to run in parallel.
  • back_prop: Whether backprop is enabled for this while loop.
  • swap_memory: Whether GPU-CPU memory swap is enabled for this loop.
  • name: Optional name prefix for the returned tensors.

Returns: The output tensors for the loop variables after the loop. When the length of loop_vars is 1 this is a Tensor, TensorArray or IndexedSlice and when the length of loop_vars is greater than 1 it returns a list.

Raises:

  • TypeError: if cond or body is not callable.
  • ValueError: if loop_vars is empty.

Example:

```python
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
```

Example with nesting:

```python
ijk_0 = (tf.constant(0), (tf.constant(1), tf.constant(2)))
c = lambda i, (j, k): i < 10
b = lambda i, (j, k): (i + 1, ((j + k), (j - k)))
ijk_final = tf.while_loop(c, b, ijk_0)
```

Example using shape_invariants:

```python
i0 = tf.constant(0)
m0 = tf.ones([2, 2])
c = lambda i, m: i < 10
b = lambda i, m: [i+1, tf.concat(0, [m, m])]
tf.while_loop(
    c, b, loop_vars=[i0, m0],
    shape_invariants=[i0.get_shape(), tensor_shape.TensorShape([None, 2])])
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def while_loop_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.while_loop_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.while_loop_layer

Return

Applicative

Original documentation for Builder.while_loop_layer

def while_loop_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.while_loop, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.while_loop

def while_loop(cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None):

Repeat body while the condition cond is true.

cond is a callable returning a boolean scalar tensor. body is a callable returning a (possibly nested) tuple or list of tensors of the same arity (length and structure) and types as loop_vars. loop_vars is a (possibly nested) tuple or list of tensors that is passed to both cond and body. cond and body both take as many arguments as there are loop_vars.

While cond evaluates to true, body is executed.

In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations.

For correctness, tf.while_loop() strictly enforces shape invariants for the loop variables. A shape invariant is a (possibly partial) shape that is unchanged across the iterations of the loop. An error will be raised if the shape of a loop variable after an iteration is determined to be more general than or incompatible with its shape invariant. For example, a shape of [11, None] is more general than a shape of [11, 17], and [11, 21] is not compatible with [11, 17]. By default (if the argument shape_invariants is not specified), it is assumed that the initial shape of each tensor in loop_vars is the same in every iteration. The shape_invariants argument allows the caller to specify a less specific shape invariant for each loop variable, which is needed if the shape varies between iterations. The Tensor.set_shape() function may also be used in the body function to indicate that the output loop variable has a particular shape. The shape invariant for SparseTensor and IndexedSlices are treated specially as follows:

a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.shape property. It must be the shape of a vector.

b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]).

while_loop implements non-strict semantics, enabling multiple iterations to run in parallel. The maximum number of parallel iterations can be controlled by parallel_iterations, which gives users some control over memory consumption and execution order. For correct programs, while_loop should return the same result for any parallel_iterations > 0.

For training, TensorFlow remembers the tensors that are produced in the forward inference but needed in back propagation. These tensors can be a main source of memory consumption and often cause OOM problems when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. This for example allows us to train RNN models with very long sequences and large batches.

Args:

  • cond: A callable that represents the termination condition of the loop.
  • body: A callable that represents the loop body.
  • loop_vars: A (possibly nested) tuple or list of numpy array, Tensor, and TensorArray objects.
  • shape_invariants: The shape invariants for the loop variables.
  • parallel_iterations: The number of iterations allowed to run in parallel.
  • back_prop: Whether backprop is enabled for this while loop.
  • swap_memory: Whether GPU-CPU memory swap is enabled for this loop.
  • name: Optional name prefix for the returned tensors.

Returns: The output tensors for the loop variables after the loop. When the length of loop_vars is 1 this is a Tensor, TensorArray or IndexedSlice and when the length of loop_vars is greater than 1 it returns a list.

Raises:

  • TypeError: if cond or body is not callable.
  • ValueError: if loop_vars is empty.

Example:

```python
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
```

Example with nesting:

```python
ijk_0 = (tf.constant(0), (tf.constant(1), tf.constant(2)))
c = lambda i, (j, k): i < 10
b = lambda i, (j, k): (i + 1, ((j + k), (j - k)))
ijk_final = tf.while_loop(c, b, ijk_0)
```

Example using shape_invariants:

```python
i0 = tf.constant(0)
m0 = tf.ones([2, 2])
c = lambda i, m: i < 10
b = lambda i, m: [i+1, tf.concat(0, [m, m])]
tf.while_loop(
    c, b, loop_vars=[i0, m0],
    shape_invariants=[i0.get_shape(), tensor_shape.TensorShape([None, 2])])
```

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def xw_plus_b(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.xw_plus_b, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.xw_plus_b

Return

Applicative

Original documentation for Builder.xw_plus_b

def xw_plus_b(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.xw_plus_b that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.xw_plus_b

def xw_plus_b(x, weights, biases, name=None)

Computes matmul(x, weights) + biases.

Args:

  • x: a 2D tensor. Dimensions typically: batch, in_units.
  • weights: a 2D tensor. Dimensions typically: in_units, out_units.
  • biases: a 1D tensor. Dimensions: out_units.
  • name: A name for the operation (optional). If not specified "xw_plus_b" is used.

Returns: A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.
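
The op is just the affine map written out; a small equivalence sketch (TF 0.x-era API assumed):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 3])      # batch x in_units
w = tf.Variable(tf.truncated_normal([3, 2]))   # in_units x out_units
b = tf.Variable(tf.zeros([2]))                 # out_units

y = tf.nn.xw_plus_b(x, w, b)  # builds the same graph as tf.matmul(x, w) + b
```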

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def xw_plus_b_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.xw_plus_b_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.xw_plus_b_layer

Return

Applicative

Original documentation for Builder.xw_plus_b_layer

def xw_plus_b_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.xw_plus_b, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.xw_plus_b

def xw_plus_b(x, weights, biases, name=None):

Computes matmul(x, weights) + biases.

Args:

  • x: a 2D tensor. Dimensions typically: batch, in_units.
  • weights: a 2D tensor. Dimensions typically: in_units, out_units.
  • biases: a 1D tensor. Dimensions: out_units.
  • name: A name for the operation (optional). If not specified "xw_plus_b" is used.

Returns: A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def xw_plus_b_v1(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.xw_plus_b_v1, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.xw_plus_b_v1

Return

Applicative

Original documentation for Builder.xw_plus_b_v1

def xw_plus_b_v1(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.xw_plus_b_v1 that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.xw_plus_b_v1

def xw_plus_b_v1(x, weights, biases, name=None)

Computes matmul(x, weights) + biases.

This is a deprecated version of xw_plus_b that will soon be removed.

Args:

  • x: a 2D tensor. Dimensions typically: batch, in_units.
  • weights: a 2D tensor. Dimensions typically: in_units, out_units.
  • biases: a 1D tensor. Dimensions: out_units.
  • name: A name for the operation (optional). If not specified "xw_plus_b_v1" is used.

Returns: A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def xw_plus_b_v1_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.xw_plus_b_v1_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.xw_plus_b_v1_layer

Return

Applicative

Original documentation for Builder.xw_plus_b_v1_layer

def xw_plus_b_v1_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.xw_plus_b_v1, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.xw_plus_b_v1

def xw_plus_b_v1(x, weights, biases, name=None):

Computes matmul(x, weights) + biases.

This is a deprecated version of xw_plus_b that will soon be removed.

Args:

  • x: a 2D tensor. Dimensions typically: batch, in_units.
  • weights: a 2D tensor. Dimensions typically: in_units, out_units.
  • biases: a 1D tensor. Dimensions: out_units.
  • name: A name for the operation (optional). If not specified "xw_plus_b_v1" is used.

Returns: A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zero_fraction(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zero_fraction, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zero_fraction

Return

Applicative

Original documentation for Builder.zero_fraction

def zero_fraction(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.nn.zero_fraction that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.nn.zero_fraction

def zero_fraction(value, name=None)

Returns the fraction of zeros in value.

If value is empty, the result is nan.

This is useful in summaries to measure and report sparsity. For example,

z = tf.nn.relu(...)
summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z))

Args:

  • value: A tensor of numeric type.
  • name: A name for the operation (optional).

Returns: The fraction of zeros in value, with type float32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zero_fraction_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zero_fraction_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zero_fraction_layer

Return

Applicative

Original documentation for Builder.zero_fraction_layer

def zero_fraction_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.nn.zero_fraction, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.nn.zero_fraction

def zero_fraction(value, name=None):

Returns the fraction of zeros in value.

If value is empty, the result is nan.

This is useful in summaries to measure and report sparsity. For example,

z = tf.nn.relu(...)
summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z))

Args:

  • value: A tensor of numeric type.
  • name: A name for the operation (optional).

Returns: The fraction of zeros in value, with type float32.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeros(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeros, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeros

Return

Applicative

Original documentation for Builder.zeros

def zeros(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.zeros that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.zeros

def zeros(shape, dtype=tf.float32, name=None)

Creates a tensor with all elements set to zero.

This operation returns a tensor of type dtype with shape shape and all elements set to zero.

For example:

```python
tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

Args:

  • shape: Either a list of integers, or a 1-D Tensor of type int32.
  • dtype: The type of an element in the resulting Tensor.
  • name: A name for the operation (optional).

Returns: A Tensor with all elements set to zero.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeros_initializer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeros_initializer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeros_initializer

Return

Applicative

Original documentation for Builder.zeros_initializer

def zeros_initializer(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.zeros_initializer that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.zeros_initializer

def zeros_initializer(shape, dtype=tf.float32, partition_info=None)

An adaptor for zeros() to match the Initializer spec.
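
Because it matches the Initializer calling convention, it can be handed to the variable APIs that expect one; a sketch against the TF 0.x-era tf.get_variable (an assumption, this usage is not shown in the document):

```python
import tensorflow as tf

# get_variable invokes the initializer with (shape, dtype, partition_info),
# which is exactly the signature zeros_initializer adapts zeros() to.
v = tf.get_variable("v", shape=[3], initializer=tf.zeros_initializer)
```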

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeros_initializer_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeros_initializer_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeros_initializer_layer

Return

Applicative

Original documentation for Builder.zeros_initializer_layer

def zeros_initializer_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.zeros_initializer, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.zeros_initializer

def zeros_initializer(shape, dtype=tf.float32, partition_info=None):

An adaptor for zeros() to match the Initializer spec.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeros_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeros_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeros_layer

Return

Applicative

Original documentation for Builder.zeros_layer

def zeros_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.zeros, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.zeros

def zeros(shape, dtype=tf.float32, name=None):

Creates a tensor with all elements set to zero.

This operation returns a tensor of type dtype with shape shape and all elements set to zero.

For example:

```python
tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

Args:

  • shape: Either a list of integers, or a 1-D Tensor of type int32.
  • dtype: The type of an element in the resulting Tensor.
  • name: A name for the operation (optional).

Returns: A Tensor with all elements set to zero.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeros_like(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeros_like, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeros_like

Return

Applicative

Original documentation for Builder.zeros_like

def zeros_like(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.zeros_like that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.zeros_like

def zeros_like(tensor, dtype=None, name=None, optimize=True)

Creates a tensor with all elements set to zero.

Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to zero. Optionally, you can use dtype to specify a new type for the returned tensor.

For example:

```python
# 'tensor' is [[1, 2, 3], [4, 5, 6]]
tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
```

Args:

  • tensor: A Tensor.
  • dtype: A type for the returned Tensor. Must be float32, float64, int8, int16, int32, int64, uint8, complex64, or complex128.
  • name: A name for the operation (optional).
  • optimize: if true, attempt to statically determine the shape of 'tensor' and encode it as a constant.

Returns: A Tensor with all elements set to zero.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeros_like_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeros_like_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeros_like_layer

Return

Applicative

Original documentation for Builder.zeros_like_layer

def zeros_like_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.zeros_like, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.zeros_like

def zeros_like(tensor, dtype=None, name=None, optimize=True):

Creates a tensor with all elements set to zero.

Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to zero. Optionally, you can use dtype to specify a new type for the returned tensor.

For example:

```python
# 'tensor' is [[1, 2, 3], [4, 5, 6]]
tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
```

Args:

  • tensor: A Tensor.
  • dtype: A type for the returned Tensor. Must be float32, float64, int8, int16, int32, int64, uint8, complex64, or complex128.
  • name: A name for the operation (optional).
  • optimize: if true, attempt to statically determine the shape of 'tensor' and encode it as a constant.

Returns: A Tensor with all elements set to zero.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeta(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeta, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeta

Return

Applicative

Original documentation for Builder.zeta

def zeta(builder):

THIS METHOD IS AUTOMATICALLY GENERATED

@immutable

This method is a lifted version of the function tf.zeta that works with tensorbuilder.core.builders.Builders. Instead of taking a Tensor as its first argument it takes a builder; the rest of the arguments are exactly the same.

Original Documentation for tf.zeta

def zeta(x, q, name=None)

Compute the Hurwitz zeta function \(\zeta(x, q)\).

The Hurwitz zeta function is defined as:

\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}

Args:

  • x: A Tensor. Must be one of the following types: float32, float64.
  • q: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.
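
A small sanity check: with q = 1 the Hurwitz zeta reduces to the Riemann zeta, so zeta(2, 1) = pi^2/6 (sketch; TF 0.x-era session API assumed):

```python
import tensorflow as tf

z = tf.zeta(tf.constant([2.0]), tf.constant([1.0]))

with tf.Session() as sess:
    print(sess.run(z))  # ~[1.6449341], i.e. pi**2 / 6
```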

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)

def zeta_layer(

app, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .compose(Builder.zeta_layer, ...)

Arguments

  • All other *args and **kwargs are forwarded to Builder.zeta_layer

Return

Applicative

Original documentation for Builder.zeta_layer

def zeta_layer(builder, size):

THIS METHOD IS AUTOMATICALLY GENERATED

Alias for .fully_connected(size, activation_fn = tf.zeta, ...)

Arguments

  • size: the size of the resulting layer
  • All other *args and **kwargs are forwarded to tf.contrib.layers.fully_connected

Return

Builder

Original documentation for tf.zeta

def zeta(x, q, name=None):

Compute the Hurwitz zeta function \(\zeta(x, q)\).

The Hurwitz zeta function is defined as:

\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}

Args:

  • x: A Tensor. Must be one of the following types: float32, float64.
  • q: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns: A Tensor. Has the same type as x.

def _method(app, *args, **kwargs):
    def _lambda(builder):
        g = getattr(builder, f.__name__)
        return g(*args, **kwargs)
    return app.compose(_lambda)