tensorbuilder.builder module
```python
from phi.builder import Builder
import inspect
from tensordata import Data
from phi import P


class TensorBuilder(Builder):
    """docstring for TensorBuilder."""

    def data(self, *args, **kwargs):
        return Data(*args, **kwargs)


TensorBuilder.__core__ = [name for name, f in inspect.getmembers(TensorBuilder, predicate=inspect.ismethod)]
```
Module variables
var name
Functions
def f(self, *args, **kwargs)

```python
def data(self, *args, **kwargs):
    return Data(*args, **kwargs)
```
Classes
class TensorBuilder
docstring for TensorBuilder.
```python
class TensorBuilder(Builder):
    """docstring for TensorBuilder."""

    def data(self, *args, **kwargs):
        return Data(*args, **kwargs)
```
Ancestors (in MRO)
- TensorBuilder
- phi.builder.Builder
- phi.lambdas.Lambda
- phi.dsl.Function
- phi.dsl.Node
- __builtin__.object
Class variables
var If
var Ref
Instance variables
var Obj
var Read
var Rec
var Write
var layers
Methods
def __init__(self, f)
```python
def __init__(self, f):
    super(Lambda, self).__init__(f)
    self._f = f
```
def Assert(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.Assert(*args, **kwargs)
It accepts the same arguments as `tensorflow.Assert`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.Assert(x1, *args, **kwargs)
is equivalent to
builder.Assert(*args, **kwargs)(x1)
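The pattern above is plain partial application. As a minimal sketch in pure Python of what the generated methods do (the `wrap_fn` helper is illustrative, not part of tensorbuilder):

```python
import functools

def wrap_fn(fn):
    # capture *args/**kwargs now, expect the omitted 1st argument later
    def partial(*args, **kwargs):
        @functools.wraps(fn)
        def apply(x1):
            return fn(x1, *args, **kwargs)
        return apply
    return partial

# wrap_fn(tf.Assert)(data, summarize=5)(condition) then makes the same call
# as tf.Assert(condition, data, summarize=5)
```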
tensorflow.Assert
Asserts that the given condition is true.
If `condition` evaluates to false, print the list of tensors in `data`.
`summarize` determines how many entries of the tensors to print.
NOTE: To ensure that Assert executes, one usually attaches a dependency:
```python
# Ensure maximum element of x is smaller or equal to 1
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
x = tf.with_dependencies([assert_op], x)
```
Args:
  condition: The condition to evaluate.
  data: The tensors to print out when condition is false.
  summarize: Print this many entries of each tensor.
  name: A name for this operation (optional).
Returns:
  assert_op: An `Operation` that, when executed, raises a `tf.errors.InvalidArgumentError` if `condition` is not true.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def Context(cls, *args)
Builder Core. Also available as a global function as `phi.Context`.
Returns the context object of the current `dsl.With` statement.
Arguments
- *args: By design, `Context` accepts any number of arguments and completely ignores them.
This is a classmethod and it doesn't return a `Builder`/`Lambda` by design, so it can be called directly:
```python
from phi import P, Context, Obj

def read_file(z):
    f = Context()
    return f.read()

lines = P.Pipe(
    "text.txt",
    P.With( open,
        read_file,
        Obj.split("\n")
    )
)
```
Here we called `Context` with no arguments to get the context back; however, since you can also give this function an argument (which it will ignore), it can be passed to the DSL, so we can rewrite the previous as:
```python
from phi import P, Context, Obj

lines = P.Pipe(
    "text.txt",
    P.With( open,
        Context,  # f
        Obj.read(),
        Obj.split("\n")
    )
)
```
`Context` raises an exception when used outside of a `With` block.
Also see
- `phi.builder.Builder.Obj`
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
```python
@classmethod
def Context(cls, *args):
    """Builder Core. Returns the context object of the current `dsl.With` statement."""
    if dsl.With.GLOBAL_CONTEXT is dsl._NO_VALUE:
        raise Exception("Cannot use 'Context' outside of a 'With' block")
    return dsl.With.GLOBAL_CONTEXT
```
def DoRegisterMethod(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True)
This method enables you to register any function `fn` that takes an Applicative as its first argument as a method of the Builder class.
Arguments
- `fn`: a function that at least takes an Applicative as its first argument.
- `library_path`: the route of the library from which this function was taken, used for documentation purposes.
- `alias`: allows you to specify the name of the method; it will take the name of the function if it's `None`.
- `doc`: the documentation for the method; if `None`, a predefined documentation will be generated based on the documentation of `fn`.
Return
None
Examples
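A minimal sketch of a direct registration, assuming a function whose first argument is the Builder itself (the `double` function and the `"examples.lib"` path are illustrative):

```python
# hypothetical: a function that takes the Builder (Applicative) as its
# first argument and composes an extra step onto it
def double(builder):
    return builder.Make(lambda x: 2 * x)

# register it under the illustrative library_path "examples.lib"
TensorBuilder.DoRegisterMethod(double, "examples.lib")

# TensorBuilder instances now expose it as a regular method: builder.double()
```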
```python
@classmethod
def DoRegisterMethod(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                     explanation="", method_type=utils.identity, explain=True):
    """Register any function `fn` that takes an Applicative as its first argument as a method of the Builder class."""
    if wrapped:
        fn = functools.wraps(wrapped)(fn)

    fn_signature = utils.get_method_sig(fn)
    fn_docs = inspect.getdoc(fn)
    name = alias if alias else fn.__name__
    original_name = fn.__name__ if wrapped else original_name if original_name else name

    fn.__name__ = name
    fn.__doc__ = doc if doc else ("""
THIS METHOD IS AUTOMATICALLY GENERATED

    builder.{1}(*args, **kwargs)

It accepts the same arguments as `{3}.{0}`. """ + explanation + """

**{3}.{0}**

{2}
""").format(original_name, name, fn_docs, library_path) if explain else fn_docs

    if name in cls.__core__:
        raise Exception("Can't add method '{0}' because its on __core__".format(name))

    fn = method_type(fn)
    setattr(cls, name, fn)
```
def Make(self, *code, **kwargs)
The `Make` method takes an expression from the DSL and compiles it to a function.
Arguments
- *code: any expression from the DSL. `code` is implicitly a `tuple` since that is what Python gives you when you declare a Variadic Function; therefore, according to the rules of the DSL, all expressions inside of `code` will be composed together. See Composition.
- flatten = False: if `flatten` is True and the argument being returned by the compiled function is a `list`, it will instead return a flattened list.
- _return_type = None: By default `Make` returns an object of the same class, e.g. `Builder`; however, you can pass in a custom class that inherits from `Builder` as the returned container. This is useful if the custom builder has specialized methods.
- create_ref_context = True: determines if a reference manager should be created on compilation. See Compile.
- refs = True: external/default values for references passed during compilation. See Compile.
Examples
```python
from phi import P

def add1(x):
    return x + 1

def mul3(x):
    return x * 3

f = P.Make(
    add1,
    mul3
)

assert f(1) == 6
```
Here `f` is equivalent to
```python
def f(x):
    x = add1(x)
    x = mul3(x)
    return x
```
The previous example using lambdas to create the functions
```python
from phi import P

f = P.Make(
    P + 1,
    P * 3
)

assert f(1) == 6
```
Also see
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
- [Compile](https://cgarciae.github.io/phi/dsl.m.html#phi.dsl.Compile)
- [lambdas](https://cgarciae.github.io/phi/lambdas.m.html)
```python
def Make(self, *code, **kwargs):
    """The `Make` method takes an expression from the DSL and compiles it to a function."""
    _return_type = kwargs.get('_return_type', None)
    flatten = kwargs.get('flatten', False)
    refs = kwargs.get('refs', {})
    create_ref_context = kwargs.get('create_ref_context', True)

    # code = (self, code)

    if flatten:
        code = (code, lambda x: utils.flatten_list(x) if type(x) is list else x)

    f = dsl.Compile(code, refs, create_ref_context=create_ref_context)
    return self.__then__(f, _return_type=_return_type)
```
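A small sketch of the `flatten` keyword, assuming (per the phi DSL) that a list expression branches its input into a list of results:

```python
from phi import P

# a list in the DSL is a branch: every element is applied to the same input
f = P.Make([P + 1, [P * 2, P * 3]])
assert f(1) == [2, [2, 3]]

# flatten=True flattens the resulting nested list
g = P.Make([P + 1, [P * 2, P * 3]], flatten=True)
assert g(1) == [2, 2, 3]
```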
def NMake(self, *args, **kwargs)
`NMake` is a shortcut for `Make(..., create_ref_context=False)`; its full name should be NoCreateRefContextMake, but that is impractically long. Normally, methods that compile DSL expressions like `phi.builder.Builder.Make` or `phi.builder.Builder.Pipe` create a reference context unless specified otherwise; these contexts encapsulate references (see read or write) and prevent them from leaking, which is good. There are times, however, when you consciously want a sub-Make or sub-Pipe expression to read or write references from the main Make or Pipe expression; for this you need to set `create_ref_context` to `False`.
Arguments
- Same arguments as `phi.builder.Builder.Make`, but...
- create_ref_context is hardcoded to `False`
Examples
If you compile a sub expression as a function for another expression e.g.
```python
from phi import P

assert 1 == P.Pipe(
    1,
    {'s'},   # write s == 1, outer context
    P.Make(
        P + 1,
        {'s'}   # write s == 2, inner context
    ),
    's'   # read s == 1, outer context
)
```
you find that references are not shared. However, if you avoid the creation of a new reference context via a keyword argument
```python
from phi import P

assert 2 == P.Pipe(
    1,
    {'s'},   # write s == 1, same context
    P.Make(
        P + 1,
        {'s'},   # write s == 2, same context
        create_ref_context=False
    ),
    's'   # read s == 2, same context
)
```
you can achieve what you want. Yet writing `create_ref_context=False` is a little cumbersome, so to make things nicer we just use a shortcut: appending an `N` at the beginning of the `NMake` method
```python
from phi import P

assert 2 == P.Pipe(
    1,
    {'s'},   # write s == 1, same context
    P.NMake(
        P + 1,
        {'s'}   # write s == 2, same context
    ),
    's'   # read s == 2, same context
)
```
Also see
- `phi.builder.Builder.Make`
- `phi.builder.Builder.NPipe`
- `phi.builder.Builder.NRun`
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
- [Compile](https://cgarciae.github.io/phi/dsl.m.html#phi.dsl.Compile)
```python
def NMake(self, *args, **kwargs):
    """`NMake` is a shortcut for `Make(..., create_ref_context=False)`."""
    kwargs['create_ref_context'] = False
    return self.Make(*args, **kwargs)
```
def NPipe(self, x, *code, **kwargs)
`NPipe` is a shortcut for `Pipe(..., create_ref_context=False)`; its full name should be NoCreateRefContextPipe, but that is impractically long. Normally, methods that compile DSL expressions like `phi.builder.Builder.Make` or `phi.builder.Builder.Pipe` create a reference context unless specified otherwise; these contexts encapsulate references (see read or write) and prevent them from leaking, which is good. There are times, however, when you consciously want a sub-Make or sub-Pipe expression to read or write references from the main Make or Pipe expression; for this you need to set `create_ref_context` to `False`.
Arguments
- Same arguments as `phi.builder.Builder.Pipe`, but...
- create_ref_context is hardcoded to `False`
Examples
If you compile a sub expression as a function for another expression e.g.
```python
from phi import P

assert 1 == P.Pipe(
    1,
    {'s'},   # write s == 1, outer context
    lambda x: P.Pipe(
        x,
        P + 1,
        {'s'}   # write s == 2, inner context
    ),
    's'   # read s == 1, outer context
)
```
you find that references are not shared. However, if you avoid the creation of a new reference context via a keyword argument
```python
from phi import P

assert 2 == P.Pipe(
    1,
    {'s'},   # write s == 1, same context
    lambda x: P.Pipe(
        x,
        P + 1,
        {'s'},   # write s == 2, same context
        create_ref_context=False
    ),
    's'   # read s == 2, same context
)
```
you can achieve what you want. Yet writing `create_ref_context=False` is a little cumbersome, so to make things nicer we just use a shortcut: appending an `N` at the beginning of the `NPipe` method
```python
from phi import P

assert 2 == P.Pipe(
    1,
    {'s'},   # write s == 1, same context
    lambda x: P.NPipe(
        x,
        P + 1,
        {'s'}   # write s == 2, same context
    ),
    's'   # read s == 2, same context
)
```
Also see
- `phi.builder.Builder.Pipe`
- `phi.builder.Builder.NMake`
- `phi.builder.Builder.NRun`
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
- [Compile](https://cgarciae.github.io/phi/dsl.m.html#phi.dsl.Compile)
```python
def NPipe(self, x, *code, **kwargs):
    """`NPipe` is a shortcut for `Pipe(..., create_ref_context=False)`."""
    return self.NMake(*code, **kwargs)(x)
```
def NRun(self, *code, **kwargs)
`NRun` is a shortcut for `Run(..., create_ref_context=False)`; its full name should be NoCreateRefContextRun, but that is impractically long.
Also see
- `phi.builder.Builder.Run`
- `phi.builder.Builder.NMake`
- `phi.builder.Builder.NPipe`
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
- [Compile](https://cgarciae.github.io/phi/dsl.m.html#phi.dsl.Compile)
```python
def NRun(self, *code, **kwargs):
    """`NRun` is a shortcut for `Run(..., create_ref_context=False)`."""
    return self.NPipe(None, *code, **kwargs)
```
def NotDifferentiable(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.NotDifferentiable(*args, **kwargs)
It accepts the same arguments as `tensorflow.NotDifferentiable`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.NotDifferentiable(x1, *args, **kwargs)
is equivalent to
builder.NotDifferentiable(*args, **kwargs)(x1)
tensorflow.NotDifferentiable
Specifies that ops of type `op_type` are not differentiable.
This function should not be used for operations that have a well-defined gradient that is not yet implemented.
This function is only used when defining a new op type. It may be used for ops such as `tf.size()` that are not differentiable. For example:
```python
tf.NotDifferentiable("Size")
```
The gradient computed for 'op_type' will then propagate zeros.
For ops that have a well-defined gradient but are not yet implemented, no declaration should be made, and an error must be thrown if an attempt to request its gradient is made.
Args:
  op_type: The string type of an operation. This corresponds to the `OpDef.name` field for the proto that defines the operation.
Raises:
  TypeError: If `op_type` is not a string.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def Pipe(self, x, *code, **kwargs)
`Pipe` is a method that takes an input argument plus an expression from the DSL; it compiles the expression and applies the resulting function to the input. It's highly inspired by Elixir's |> (pipe) operator.
Arguments
- x: any input object
- *code: any expression from the DSL. `code` is implicitly a `tuple` since that is what Python gives you when you declare a Variadic Function; therefore, according to the rules of the DSL, all expressions inside of `code` will be composed together. See Composition.
- **kwargs: `Pipe` forwards all `kwargs` to `phi.builder.Builder.Make`; visit its documentation for more info.
Examples
```python
from phi import P

def add1(x):
    return x + 1

def mul3(x):
    return x * 3

x = P.Pipe(
    1,     # input
    add1,  # 1 + 1 == 2
    mul3   # 2 * 3 == 6
)

assert x == 6
```
The previous example using lambdas to create the functions
```python
from phi import P

x = P.Pipe(
    1,      # input
    P + 1,  # 1 + 1 == 2
    P * 3   # 2 * 3 == 6
)

assert x == 6
```
Also see
- `phi.builder.Builder.Make`
- `phi.builder.Builder.Run`
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
- [Compile](https://cgarciae.github.io/phi/dsl.m.html#phi.dsl.Compile)
- [lambdas](https://cgarciae.github.io/phi/lambdas.m.html)
```python
def Pipe(self, x, *code, **kwargs):
    """`Pipe` compiles the expression and applies the resulting function to the input `x`."""
    return self.Make(*code, **kwargs)(x)
```
def Print(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.Print(*args, **kwargs)
It accepts the same arguments as `tensorflow.Print`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.Print(x1, *args, **kwargs)
is equivalent to
builder.Print(*args, **kwargs)(x1)
tensorflow.Print
Prints a list of tensors.
This is an identity op with the side effect of printing `data` when evaluating.
Args:
  input_: A tensor passed through this op.
  data: A list of tensors to print out when op is evaluated.
  message: A string, prefix of the error message.
  first_n: Only log `first_n` number of times. Negative numbers log always; this is the default.
  summarize: Only print this many entries of each tensor. If None, then a maximum of 3 elements are printed per input tensor.
  name: A name for the operation (optional).
Returns:
  Same tensor as `input_`.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def Register0(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def Register0(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
              explanation="", method_type=utils.identity, explain=True, _return_type=None):
    def register_decorator(fn):
        cls.RegisterFunction0(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                              wrapped=wrapped, explanation=explanation, method_type=method_type,
                              explain=explain, _return_type=_return_type)
        return fn
    return register_decorator
```
def Register1(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def Register1(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
              explanation="", method_type=utils.identity, explain=True, _return_type=None):
    def register_decorator(fn):
        _wrapped = wrapped if wrapped else fn
        cls.RegisterFunction1(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                              wrapped=_wrapped, explanation=explanation, method_type=method_type,
                              explain=explain, _return_type=_return_type)
        return fn
    return register_decorator
```
def Register2(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def Register2(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
              explanation="", method_type=utils.identity, explain=True, _return_type=None):
    def register_decorator(fn):
        cls.RegisterFunction2(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                              wrapped=wrapped, explanation=explanation, method_type=method_type,
                              explain=explain, _return_type=_return_type)
        return fn
    return register_decorator
```
def Register3(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def Register3(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
              explanation="", method_type=utils.identity, explain=True, _return_type=None):
    def register_decorator(fn):
        cls.RegisterFunction3(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                              wrapped=wrapped, explanation=explanation, method_type=method_type,
                              explain=explain, _return_type=_return_type)
        return fn
    return register_decorator
```
def Register4(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def Register4(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
              explanation="", method_type=utils.identity, explain=True, _return_type=None):
    def register_decorator(fn):
        cls.RegisterFunction4(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                              wrapped=wrapped, explanation=explanation, method_type=method_type,
                              explain=explain, _return_type=_return_type)
        return fn
    return register_decorator
```
def Register5(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def Register5(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
              explanation="", method_type=utils.identity, explain=True, _return_type=None):
    def register_decorator(fn):
        cls.RegisterFunction5(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                              wrapped=wrapped, explanation=explanation, method_type=method_type,
                              explain=explain, _return_type=_return_type)
        return fn
    return register_decorator
```
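A brief sketch of the decorator variants, using `Register1` (the `scale` function and the `"examples.lib"` path are illustrative):

```python
# register a function whose 1st argument will be supplied later by the pipeline
@TensorBuilder.Register1("examples.lib")
def scale(x, factor):
    return x * factor

# per RegisterFunction1 below, builder.scale(3) returns a partial expecting
# the omitted 1st argument: builder.scale(3)(10) == scale(10, 3) == 30
```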
def RegisterFunction0(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def RegisterFunction0(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                      explanation="", method_type=utils.identity, explain=True, _return_type=None):
    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then0(fn, *args, **kwargs)

    explanation = """
However, a partial with the arguments is returned which expects any argument `x` and completely ignores it, such that

    {3}.{0}(*args, **kwargs)

is equivalent to

    builder.{1}(*args, **kwargs)(x)

""" + explanation if explain else ""

    cls.DoRegisterMethod(method, library_path, alias=alias, original_name=original_name, doc=doc,
                         wrapped=wrapped, explanation=explanation, method_type=method_type, explain=explain)
```
def RegisterFunction1(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
This method enables you to register any function `fn` that takes an object as its first argument as a method of the Builder and Applicative class.
Arguments
- `fn`: a function that at least takes an Object as its first argument.
- `library_path`: the route of the library from which this function was taken, used for documentation purposes.
- `alias`: allows you to specify the name of the method; it will take the name of the function if it's `None`.
- `doc`: the documentation for the method; if `None`, a predefined documentation will be generated based on the documentation of `fn`.
Return
None
Examples
```python
@classmethod
def RegisterFunction1(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                      explanation="", method_type=utils.identity, explain=True, _return_type=None):
    """Register any function `fn` that takes an object as its first argument as a method of the Builder and Applicative class."""
    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)

    explanation = """
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that

    {3}.{0}(x1, *args, **kwargs)

is equivalent to

    builder.{1}(*args, **kwargs)(x1)

""" + explanation if explain else ""

    cls.DoRegisterMethod(method, library_path, alias=alias, original_name=original_name, doc=doc,
                         wrapped=wrapped, explanation=explanation, method_type=method_type, explain=explain)
```
def RegisterFunction2(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def RegisterFunction2(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                      explanation="", method_type=utils.identity, explain=True, _return_type=None):
    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then2(fn, *args, **kwargs)

    explanation = """
However, the 2nd argument is omitted, a partial with the rest of the arguments is returned which expects the 2nd argument such that

    {3}.{0}(x1, x2, *args, **kwargs)

is equivalent to

    builder.{1}(x1, *args, **kwargs)(x2)

""" + explanation if explain else ""

    cls.DoRegisterMethod(method, library_path, alias=alias, original_name=original_name, doc=doc,
                         wrapped=wrapped, explanation=explanation, method_type=method_type, explain=explain)
```
def RegisterFunction3(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def RegisterFunction3(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                      explanation="", method_type=utils.identity, explain=True, _return_type=None):
    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then3(fn, *args, **kwargs)

    explanation = """
However, the 3rd argument is omitted, a partial with the rest of the arguments is returned which expects the 3rd argument such that

    {3}.{0}(x1, x2, x3, *args, **kwargs)

is equivalent to

    builder.{1}(x1, x2, *args, **kwargs)(x3)

""" + explanation if explain else ""

    cls.DoRegisterMethod(method, library_path, alias=alias, original_name=original_name, doc=doc,
                         wrapped=wrapped, explanation=explanation, method_type=method_type, explain=explain)
```
def RegisterFunction4(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def RegisterFunction4(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                      explanation="", method_type=utils.identity, explain=True, _return_type=None):
    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then4(fn, *args, **kwargs)

    explanation = """
However, the 4th argument is omitted, a partial with the rest of the arguments is returned which expects the 4th argument such that

    {3}.{0}(x1, x2, x3, x4, *args, **kwargs)

is equivalent to

    builder.{1}(x1, x2, x3, *args, **kwargs)(x4)

""" + explanation if explain else ""

    cls.DoRegisterMethod(method, library_path, alias=alias, original_name=original_name, doc=doc,
                         wrapped=wrapped, explanation=explanation, method_type=method_type, explain=explain)
```
def RegisterFunction5(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True, _return_type=None)
```python
@classmethod
def RegisterFunction5(cls, fn, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                      explanation="", method_type=utils.identity, explain=True, _return_type=None):
    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then5(fn, *args, **kwargs)

    explanation = """
However, the 5th argument is omitted, a partial with the rest of the arguments is returned which expects the 5th argument such that

    {3}.{0}(x1, x2, x3, x4, x5, *args, **kwargs)

is equivalent to

    builder.{1}(x1, x2, x3, x4, *args, **kwargs)(x5)

""" + explanation if explain else ""

    cls.DoRegisterMethod(method, library_path, alias=alias, original_name=original_name, doc=doc,
                         wrapped=wrapped, explanation=explanation, method_type=method_type,
                         explain=explain, _return_type=_return_type)
```
def RegisterMethod(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None, explanation='', method_type=utils.identity, explain=True)
```python
@classmethod
def RegisterMethod(cls, library_path, alias=None, original_name=None, doc=None, wrapped=None,
                   explanation="", method_type=utils.identity, explain=True):
    def register_decorator(fn):
        cls.DoRegisterMethod(fn, library_path, alias=alias, original_name=original_name, doc=doc,
                             wrapped=wrapped, explanation=explanation, method_type=method_type, explain=explain)
        return fn
    return register_decorator
```
def Run(self, *code, **kwargs)
`Run(*code, **kwargs)` is equivalent to `Pipe(None, *code, **kwargs)`; that is, it compiles the code and applies it to a `None` value.
Arguments
- Same as `phi.builder.Builder.Make`.
Examples
You might create code that totally ignores its input argument e.g.
```python
from phi import P, Val

result = P.Pipe(
    None,
    dict(
        x = (
            Val(10),
            P + 1
        ),
        y = (
            Val(5),
            P * 5
        )
    )
)

assert result.x == 11
assert result.y == 25
```
Here the `Val` statement drops the `None` and introduces its own constants. Given this, it's more suitable to use `Run`
```python
from phi import P, Val

result = P.Run(
    dict(
        x = (
            Val(10),
            P + 1
        ),
        y = (
            Val(5),
            P * 5
        )
    )
)

assert result.x == 11
assert result.y == 25
```
Also see
- `phi.builder.Builder.Make`
- `phi.builder.Builder.Pipe`
- [dsl](https://cgarciae.github.io/phi/dsl.m.html)
- [Compile](https://cgarciae.github.io/phi/dsl.m.html#phi.dsl.Compile)
```python
def Run(self, *code, **kwargs):
    """`Run(*code, **kwargs)` is equivalent to `Pipe(None, *code, **kwargs)`."""
    return self.Pipe(None, *code, **kwargs)
```
def Then(self, expr, *args, **kwargs)
```python
def Then(self, expr, *args, **kwargs):
    return self.ThenAt(0, expr, *args, **kwargs)
```
def Then0(self, expr, *args, **kwargs)
```python
def Then0(self, expr, *args, **kwargs):
    return self.ThenAt(-1, expr, *args, **kwargs)
```
def Then1(self, expr, *args, **kwargs)
```python
def Then(self, expr, *args, **kwargs):
    return self.ThenAt(0, expr, *args, **kwargs)
```
def Then2(self, expr, arg1, *args, **kwargs)
```python
def Then2(self, expr, arg1, *args, **kwargs):
    args = (arg1,) + args
    return self.ThenAt(1, expr, *args, **kwargs)
```
def Then3(self, expr, arg1, arg2, *args, **kwargs)
```python
def Then3(self, expr, arg1, arg2, *args, **kwargs):
    args = (arg1, arg2) + args
    return self.ThenAt(2, expr, *args, **kwargs)
```
def Then4(self, expr, arg1, arg2, arg3, *args, **kwargs)
```python
def Then4(self, expr, arg1, arg2, arg3, *args, **kwargs):
    args = (arg1, arg2, arg3) + args
    return self.ThenAt(3, expr, *args, **kwargs)
```
def Then5(self, expr, arg1, arg2, arg3, arg4, *args, **kwargs)
```python
def Then5(self, expr, arg1, arg2, arg3, arg4, *args, **kwargs):
    args = (arg1, arg2, arg3, arg4) + args
    return self.ThenAt(4, expr, *args, **kwargs)
```
def ThenAt(self, n, expr, *args, **kwargs)
```python
def ThenAt(self, n, expr, *args, **kwargs):
    _return_type = None

    if '_return_type' in kwargs:
        _return_type = kwargs['_return_type']
        del kwargs['_return_type']

    def _lambda(x):
        x = self(x)
        new_args = args[0:n] + (x,) + args[n:] if n >= 0 else args
        return expr(*new_args, **kwargs)

    return self.__unit__(_lambda, _return_type=_return_type)
```
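`ThenAt(n, expr, *args, **kwargs)` composes the current function with `expr`, inserting the piped value at position `n` of the argument list; `Then` fixes `n = 0`, `Then2` through `Then5` fix the later positions, and `Then0` (with `n = -1`) ignores the piped value altogether. A minimal sketch, assuming `P` behaves as in the `Pipe` examples above:

```python
from phi import P

def subtract(a, b):
    return a - b

# Then: the piped value becomes the 1st argument -> subtract(10, 3) == 7
assert P.Pipe(10, P.Then(subtract, 3)) == 7

# Then2: the piped value becomes the 2nd argument -> subtract(3, 10) == -7
assert P.Pipe(10, P.Then2(subtract, 3)) == -7
```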
def Val(self, x)
```python
def Val(self, x):
    return self.__then__(lambda z: x)
```
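`Val(x)` composes in a constant function: whatever value is piped in is dropped and `x` is returned instead. A minimal sketch, assuming `P` behaves as in the `Pipe` examples above:

```python
from phi import P

# the piped 1 is discarded; Val introduces 10, then P + 1 maps it to 11
assert P.Pipe(1, P.Val(10), P + 1) == 11
```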
def With(self, *args, **kwargs)
With
def With(context_manager, *body):
Arguments
- context_manager: a context manager object or valid expression from the DSL that returns a context manager.
- *body: any valid expression of the DSL to be evaluated inside the context. `*body` is interpreted as a tuple, so all expressions contained are composed.
As with normal Python programs, you sometimes might want to create a context for a block of code. You normally give a context manager to the with statement; in Phi you use `P.With` or `phi.With`.
Context
Python's with statement returns a context object through the as keyword; in the DSL this object can be obtained using the `P.Context` method or the `phi.Context` function.
Examples
```python
from phi import P, Obj, Context, With, Pipe

text = Pipe(
    "text.txt",
    With( open,
        Context,
        Obj.read()
    )
)
```
The previous is equivalent to
```python
with open("text.txt") as f:
    text = f.read()
```
```python
def With(self, *args, **kwargs):
    return self.NMake(dsl.With(*args, **kwargs))
```
def abs(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.abs(*args, **kwargs)
It accepts the same arguments as `tensorflow.abs`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.abs(x1, *args, **kwargs)
is equivalent to
builder.abs(*args, **kwargs)(x1)
tensorflow.abs
Computes the absolute value of a tensor.
Given a tensor of real numbers `x`, this operation returns a tensor containing the absolute value of each element in `x`. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).
See `tf.complex_abs()` to compute the absolute value of a complex number.
Args:
  x: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or `int64`.
  name: A name for the operation (optional).
Returns:
  A `Tensor` or `SparseTensor` the same size and type as `x` with absolute values.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def accumulate_n(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.accumulate_n(*args, **kwargs)
It accepts the same arguments as `tensorflow.accumulate_n`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.accumulate_n(x1, *args, **kwargs)
is equivalent to
builder.accumulate_n(*args, **kwargs)(x1)
tensorflow.accumulate_n
Returns the element-wise sum of a list of tensors.
Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.
NOTE: This operation is not differentiable and cannot be used if inputs depend on trainable variables. Please use tf.add_n for such cases.
For example:
```python
# tensor 'a' is [[1, 2], [3, 4]]
# tensor 'b' is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) ==> [[7, 4], [6, 14]]
```
Args:
  inputs: A list of `Tensor` objects, each with same shape and type.
  shape: Shape of elements of `inputs`.
  tensor_dtype: The type of `inputs`.
  name: A name for the operation (optional).
Returns:
  A `Tensor` of same shape and type as the elements of `inputs`.
Raises:
  ValueError: If `inputs` don't all have same shape and dtype or the shape cannot be inferred.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def acos(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.acos(*args, **kwargs)
It accepts the same arguments as `tensorflow.acos`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.acos(x1, *args, **kwargs)
is equivalent to
builder.acos(*args, **kwargs)(x1)
tensorflow.acos
Computes acos of x element-wise.
Args:
  x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `x`.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def add(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.add(*args, **kwargs)
It accepts the same arguments as `tensorflow.add`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.add(x1, *args, **kwargs)
is equivalent to
builder.add(*args, **kwargs)(x1)
tensorflow.add
Returns x + y element-wise.
NOTE: `Add` supports broadcasting. `AddN` does not. More about broadcasting here.
Args:
  x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
  y: A `Tensor`. Must have the same type as `x`.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `x`.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
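As a concrete version of the equivalence stated for `builder.add` above, here is a minimal sketch assuming a `TensorBuilder` instance named `builder` with the TensorFlow methods registered (the variable name is illustrative):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0])

y1 = tf.add(x, 3.0)       # tensorflow.add(x1, *args, **kwargs)
y2 = builder.add(3.0)(x)  # builder.add(*args, **kwargs)(x1)
# y1 and y2 describe the same element-wise addition
```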
def add_check_numerics_ops(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.add_check_numerics_ops(*args, **kwargs)
It accepts the same arguments as `tensorflow.add_check_numerics_ops`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.add_check_numerics_ops(x1, *args, **kwargs)
is equivalent to
builder.add_check_numerics_ops(*args, **kwargs)(x1)
tensorflow.add_check_numerics_ops
Connect a `check_numerics` to every floating point tensor.
`check_numerics` operations themselves are added for each `half`, `float`, or `double` tensor in the graph. For all ops in the graph, the `check_numerics` op for all of its (`half`, `float`, or `double`) inputs is guaranteed to run before the `check_numerics` op on any of its outputs.
Returns:
  A `group` op depending on all `check_numerics` ops added.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def add_n(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.add_n(*args, **kwargs)
It accepts the same arguments as `tensorflow.add_n`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.add_n(x1, *args, **kwargs)
is equivalent to
builder.add_n(*args, **kwargs)(x1)
tensorflow.add_n
Adds all input tensors element-wise.
Args:
  inputs: A list of `Tensor` objects, each with same shape and type.
  name: A name for the operation (optional).
Returns:
  A `Tensor` of same shape and type as the elements of `inputs`.
Raises:
  ValueError: If `inputs` don't all have same shape and dtype or the shape cannot be inferred.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def add_regularization_loss(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.add_regularization_loss(*args, **kwargs)
It accepts the same arguments as `tb.add_regularization_loss`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tb.add_regularization_loss(x1, *args, **kwargs)
is equivalent to
builder.add_regularization_loss(*args, **kwargs)(x1)
tb.add_regularization_loss
(no documentation available)
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def add_to_collection(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.add_to_collection(*args, **kwargs)
It accepts the same arguments as `tensorflow.add_to_collection`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.add_to_collection(x1, *args, **kwargs)
is equivalent to
builder.add_to_collection(*args, **kwargs)(x1)
tensorflow.add_to_collection
Wrapper for `Graph.add_to_collection()` using the default graph.
See `Graph.add_to_collection()` for more details.
Args:
  name: The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
  value: The value to add to the collection.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def all_candidate_sampler(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.all_candidate_sampler(*args, **kwargs)
It accepts the same arguments as `tf.nn.all_candidate_sampler`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.all_candidate_sampler(x1, *args, **kwargs)
is equivalent to
builder.all_candidate_sampler(*args, **kwargs)(x1)
tf.nn.all_candidate_sampler
Generate the set of all classes.
Deterministically generates and returns the set of all possible classes. For testing purposes. There is no need to use this, since you might as well use full softmax or full logistic regression.
Args:
  true_classes: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
  num_true: An `int`. The number of target classes per training example.
  num_sampled: An `int`. The number of possible classes.
  unique: A `bool`. Ignored.
  seed: An `int`. An operation-specific seed. Default is 0.
  name: A name for the operation (optional).
Returns:
  sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`. This operation deterministically returns the entire range `[0, num_sampled]`.
  true_expected_count: A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`. All returned values are 1.0.
  sampled_expected_count: A tensor of type `float`. Same shape as `sampled_candidates`. The expected counts under the sampling distribution of each of `sampled_candidates`. All returned values are 1.0.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def all_candidate_sampler_conv2d_layer(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.all_candidate_sampler_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.all_candidate_sampler_conv2d_layer(*args, **kwargs)(x1)
and the keyword argument `activation_fn` is set to `tf.nn.all_candidate_sampler`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
  inputs: a 4-D tensor `[batch_size, height, width, channels]`.
  num_outputs: integer, the number of output filters.
  kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
  stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
  padding: one of `VALID` or `SAME`.
  rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then the a'trous convolution is applied and stride must be set to 1.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
  ValueError: if both `rate` and `stride` are larger than one.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def all_candidate_sampler_layer(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.all_candidate_sampler_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.all_candidate_sampler_layer(*args, **kwargs)(x1)
and the keyword argument `activation_fn` is set to `tf.nn.all_candidate_sampler`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
  inputs: A tensor of at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
  num_outputs: Integer or long, the number of output units in the layer.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises:
  ValueError: if x has rank less than 2 or if its last dimension is not set.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def all_variables(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.all_variables(*args, **kwargs)
It accepts the same arguments as `tensorflow.all_variables`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.all_variables(x1, *args, **kwargs)
is equivalent to
builder.all_variables(*args, **kwargs)(x1)
tensorflow.all_variables
Returns all variables that must be saved/restored.
The `Variable()` constructor automatically adds new variables to the graph collection `GraphKeys.VARIABLES`. This convenience function returns the contents of that collection.
Returns:
  A list of `Variable` objects.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def arg_max(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.arg_max(*args, **kwargs)
It accepts the same arguments as `tensorflow.arg_max`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.arg_max(x1, *args, **kwargs)
is equivalent to
builder.arg_max(*args, **kwargs)(x1)
tensorflow.arg_max
Returns the index with the largest value across dimensions of a tensor.
Args:
  input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
  dimension: A `Tensor`. Must be one of the following types: `int32`, `int64`. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  name: A name for the operation (optional).
Returns:
  A `Tensor` of type `int64`.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def arg_min(self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.arg_min(*args, **kwargs)
It accepts the same arguments as `tensorflow.arg_min`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.arg_min(x1, *args, **kwargs)
is equivalent to
builder.arg_min(*args, **kwargs)(x1)
tensorflow.arg_min
Returns the index with the smallest value across dimensions of a tensor.
Args:
  input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
  dimension: A `Tensor`. Must be one of the following types: `int32`, `int64`. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  name: A name for the operation (optional).
Returns:
  A `Tensor` of type `int64`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def as_dtype(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.as_dtype(*args, **kwargs)
It accepts the same arguments as `tensorflow.as_dtype`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.as_dtype(x1, *args, **kwargs)
is equivalent to
builder.as_dtype(*args, **kwargs)(x1)
tensorflow.as_dtype
Converts the given `type_value` to a `DType`.
Args:
type_value: A value that can be converted to a `tf.DType` object. This may currently be a `tf.DType` object, a `DataType` enum, a string type name, or a `numpy.dtype`.
Returns:
A `DType` corresponding to `type_value`.
Raises:
TypeError: If `type_value` cannot be converted to a `DType`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
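Even with no extra arguments the generated method returns a partial that waits for the omitted 1st argument; a short sketch reusing the `T` instance assumed above:
    dtype = T.as_dtype()("float32")  # == tf.as_dtype("float32")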
def as_string(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.as_string(*args, **kwargs)
It accepts the same arguments as `tensorflow.as_string`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.as_string(x1, *args, **kwargs)
is equivalent to
builder.as_string(*args, **kwargs)(x1)
tensorflow.as_string
Converts each entry in the given tensor to strings. Supports many numeric
types and boolean.
Args:
input: A `Tensor`. Must be one of the following types: `int32`, `int64`, `complex64`, `float32`, `float64`, `bool`, `int8`.
precision: An optional `int`. Defaults to `-1`. The post-decimal precision to use for floating point numbers. Only used if precision > -1.
scientific: An optional `bool`. Defaults to `False`. Use scientific notation for floating point numbers.
shortest: An optional `bool`. Defaults to `False`. Use shortest representation (either scientific or standard) for floating point numbers.
width: An optional `int`. Defaults to `-1`. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
fill: An optional `string`. Defaults to `""`. The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
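Keyword arguments are captured by the partial in the same way; a sketch under the same assumptions as above:
    x = tf.constant([3.14159, 2.71828])
    s = T.as_string(precision=2)(x)  # == tf.as_string(x, precision=2)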
def asin(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.asin(*args, **kwargs)
It accepts the same arguments as `tensorflow.asin`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.asin(x1, *args, **kwargs)
is equivalent to
builder.asin(*args, **kwargs)(x1)
tensorflow.asin
Computes asin of x element-wise.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_equal(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_equal`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_equal(x1, *args, **kwargs)
is equivalent to
builder.assert_equal(*args, **kwargs)(x1)
tensorflow.assert_equal
Assert the condition `x == y` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_equal(x, y)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_equal(x, y)], x)
This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] == y[i]`. If both `x` and `y` are empty, this is trivially satisfied.
Args:
x: Numeric `Tensor`.
y: Numeric `Tensor`, same dtype as and broadcastable to `x`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_equal".
Returns:
Op that raises `InvalidArgumentError` if `x == y` is False.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
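The builder form slots directly into the dependency patterns above; a sketch under the same assumptions as the earlier examples:
    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    # == tf.with_dependencies([tf.assert_equal(x, y)], x)
    checked = tf.with_dependencies([T.assert_equal(y)(x)], x)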
def assert_greater(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_greater(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_greater`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_greater(x1, *args, **kwargs)
is equivalent to
builder.assert_greater(*args, **kwargs)(x1)
tensorflow.assert_greater
Assert the condition `x > y` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_greater(x, y)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_greater(x, y)], x)
This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] > y[i]`. If both `x` and `y` are empty, this is trivially satisfied.
Args:
x: Numeric `Tensor`.
y: Numeric `Tensor`, same dtype as and broadcastable to `x`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_greater".
Returns:
Op that raises `InvalidArgumentError` if `x > y` is False.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_greater_equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_greater_equal(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_greater_equal`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_greater_equal(x1, *args, **kwargs)
is equivalent to
builder.assert_greater_equal(*args, **kwargs)(x1)
tensorflow.assert_greater_equal
Assert the condition `x >= y` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_greater_equal(x, y)], x)
This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] >= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.
Args:
x: Numeric `Tensor`.
y: Numeric `Tensor`, same dtype as and broadcastable to `x`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_greater_equal".
Returns:
Op that raises `InvalidArgumentError` if `x >= y` is False.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_integer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_integer(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_integer`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_integer(x1, *args, **kwargs)
is equivalent to
builder.assert_integer(*args, **kwargs)(x1)
tensorflow.assert_integer
Assert that `x` is of integer dtype.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_integer(x)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_integer(x)], x)
Args:
x: `Tensor` whose basetype is integer and is not quantized.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_integer".
Raises:
TypeError: If `x.dtype` is anything other than non-quantized integer.
Returns:
A `no_op` that does nothing. Type can be determined statically.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_less(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_less(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_less`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_less(x1, *args, **kwargs)
is equivalent to
builder.assert_less(*args, **kwargs)(x1)
tensorflow.assert_less
Assert the condition `x < y` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_less(x, y)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_less(x, y)], x)
This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] < y[i]`. If both `x` and `y` are empty, this is trivially satisfied.
Args:
x: Numeric `Tensor`.
y: Numeric `Tensor`, same dtype as and broadcastable to `x`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_less".
Returns:
Op that raises `InvalidArgumentError` if `x < y` is False.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_less_equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_less_equal(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_less_equal`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_less_equal(x1, *args, **kwargs)
is equivalent to
builder.assert_less_equal(*args, **kwargs)(x1)
tensorflow.assert_less_equal
Assert the condition `x <= y` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_less_equal(x, y)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_less_equal(x, y)], x)
This condition holds if for every pair of (possibly broadcast) elements `x[i]`, `y[i]`, we have `x[i] <= y[i]`. If both `x` and `y` are empty, this is trivially satisfied.
Args:
x: Numeric `Tensor`.
y: Numeric `Tensor`, same dtype as and broadcastable to `x`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`, `y`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_less_equal".
Returns:
Op that raises `InvalidArgumentError` if `x <= y` is False.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_negative(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_negative(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_negative`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_negative(x1, *args, **kwargs)
is equivalent to
builder.assert_negative(*args, **kwargs)(x1)
tensorflow.assert_negative
Assert the condition `x < 0` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_negative(x)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_negative(x)], x)
Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`. If `x` is empty this is trivially satisfied.
Args:
x: Numeric `Tensor`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_negative".
Returns:
Op raising `InvalidArgumentError` unless `x` is all negative.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_non_negative(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_non_negative(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_non_negative`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_non_negative(x1, *args, **kwargs)
is equivalent to
builder.assert_non_negative(*args, **kwargs)(x1)
tensorflow.assert_non_negative
Assert the condition `x >= 0` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_non_negative(x)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_non_negative(x)], x)
Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. If `x` is empty this is trivially satisfied.
Args:
x: Numeric `Tensor`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_non_negative".
Returns:
Op raising `InvalidArgumentError` unless `x` is all non-negative.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_non_positive(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_non_positive(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_non_positive`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_non_positive(x1, *args, **kwargs)
is equivalent to
builder.assert_non_positive(*args, **kwargs)(x1)
tensorflow.assert_non_positive
Assert the condition `x <= 0` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_non_positive(x)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_non_positive(x)], x)
Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`. If `x` is empty this is trivially satisfied.
Args:
x: Numeric `Tensor`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_non_positive".
Returns:
Op raising `InvalidArgumentError` unless `x` is all non-positive.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_positive(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_positive(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_positive`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_positive(x1, *args, **kwargs)
is equivalent to
builder.assert_positive(*args, **kwargs)(x1)
tensorflow.assert_positive
Assert the condition `x > 0` holds element-wise.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_positive(x)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_positive(x)], x)
Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`. If `x` is empty this is trivially satisfied.
Args:
x: Numeric `Tensor`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_positive".
Returns:
Op raising `InvalidArgumentError` unless `x` is all positive.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_proper_iterable(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_proper_iterable(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_proper_iterable`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_proper_iterable(x1, *args, **kwargs)
is equivalent to
builder.assert_proper_iterable(*args, **kwargs)(x1)
tensorflow.assert_proper_iterable
Static assert that values is a "proper" iterable.
`Ops` that expect iterables of `Tensor` can call this to validate input. Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves.
Args:
values: Object to be checked.
Raises:
TypeError: If `values` is not iterable or is one of `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_rank(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_rank(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_rank`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_rank(x1, *args, **kwargs)
is equivalent to
builder.assert_rank(*args, **kwargs)(x1)
tensorflow.assert_rank
Assert `x` has rank equal to `rank`.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_rank(x, 2)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_rank(x, 2)], x)
Args:
x: Numeric `Tensor`.
rank: Scalar integer `Tensor`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_rank".
Returns:
Op raising `InvalidArgumentError` unless `x` has specified rank. If static checks determine `x` has correct rank, a `no_op` is returned.
Raises:
ValueError: If static checks determine `x` has wrong rank.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
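A sketch of the static case under the same assumptions as the earlier examples: when the rank is known at graph-construction time, the check resolves immediately.
    x = tf.placeholder(tf.float32, shape=[None, 3])
    # == tf.assert_rank(x, 2); the static check passes, so a no_op is returned
    op = T.assert_rank(2)(x)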
def assert_rank_at_least(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_rank_at_least(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_rank_at_least`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_rank_at_least(x1, *args, **kwargs)
is equivalent to
builder.assert_rank_at_least(*args, **kwargs)(x1)
tensorflow.assert_rank_at_least
Assert `x` has rank equal to `rank` or higher.
Example of adding a dependency to an operation:
    with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]):
        output = tf.reduce_sum(x)
Example of adding dependency to the tensor being checked:
    x = tf.with_dependencies([tf.assert_rank_at_least(x, 2)], x)
Args:
x: Numeric `Tensor`.
rank: Scalar `Tensor`.
data: The tensors to print out if the condition is False. Defaults to error message and first few entries of `x`.
summarize: Print this many entries of each tensor.
message: A string to prefix to the default message.
name: A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns:
Op raising `InvalidArgumentError` unless `x` has specified rank or higher. If static checks determine `x` has correct rank, a `no_op` is returned.
Raises:
ValueError: If static checks determine `x` has wrong rank.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_type(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_type(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_type`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_type(x1, *args, **kwargs)
is equivalent to
builder.assert_type(*args, **kwargs)(x1)
tensorflow.assert_type
Statically asserts that the given `Tensor` is of the specified type.
Args:
tensor: A tensorflow `Tensor`.
tf_type: A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc).
message: A string to prefix to the default message.
name: A name to give this `Op`. Defaults to "assert_type".
Raises: TypeError: If the tensor's data type doesn't match tf_type.
Returns:
A `no_op` that does nothing. Type can be determined statically.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assert_variables_initialized(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assert_variables_initialized(*args, **kwargs)
It accepts the same arguments as `tensorflow.assert_variables_initialized`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assert_variables_initialized(x1, *args, **kwargs)
is equivalent to
builder.assert_variables_initialized(*args, **kwargs)(x1)
tensorflow.assert_variables_initialized
Returns an Op to check if variables are initialized.
NOTE: This function is obsolete and will be removed in 6 months. Please change your implementation to use `report_uninitialized_variables()`.
When run, the returned Op will raise the exception `FailedPreconditionError` if any of the variables has not yet been initialized.
Note: This function is implemented by trying to fetch the values of the variables. If one of the variables is not initialized a message may be logged by the C++ runtime. This is expected.
Args:
var_list: List of `Variable` objects to check. Defaults to the value of all_variables().
Returns: An Op, or None if there are no variables.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assign(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assign(*args, **kwargs)
It accepts the same arguments as `tensorflow.assign`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assign(x1, *args, **kwargs)
is equivalent to
builder.assign(*args, **kwargs)(x1)
tensorflow.assign
Update 'ref' by assigning 'value' to it.
This operation outputs "ref" after the assignment is done. This makes it easier to chain operations that need to use the reset value.
Args:
ref: A mutable `Tensor`. Should be from a `Variable` node. May be uninitialized.
value: A `Tensor`. Must have the same type as `ref`. The value to be assigned to the variable.
validate_shape: An optional `bool`. Defaults to `True`. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
use_locking: An optional `bool`. Defaults to `True`. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name: A name for the operation (optional).
Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been reset.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
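Because the op returns "ref", the builder partial chains naturally; a sketch under the same assumptions as the earlier examples:
    v = tf.Variable(0.0)
    # == tf.assign(v, 1.0); yields v's new value for further chaining
    update = T.assign(1.0)(v)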
def assign_add(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assign_add(*args, **kwargs)
It accepts the same arguments as `tensorflow.assign_add`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assign_add(x1, *args, **kwargs)
is equivalent to
builder.assign_add(*args, **kwargs)(x1)
tensorflow.assign_add
Update 'ref' by adding 'value' to it.
This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.
Args:
ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
value: A `Tensor`. Must have the same type as `ref`. The value to be added to the variable.
use_locking: An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name: A name for the operation (optional).
Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def assign_sub(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.assign_sub(*args, **kwargs)
It accepts the same arguments as `tensorflow.assign_sub`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.assign_sub(x1, *args, **kwargs)
is equivalent to
builder.assign_sub(*args, **kwargs)(x1)
tensorflow.assign_sub
Update 'ref' by subtracting 'value' from it.
This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value.
Args:
ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
value: A `Tensor`. Must have the same type as `ref`. The value to be subtracted from the variable.
use_locking: An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name: A name for the operation (optional).
Returns: Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def atan(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.atan(*args, **kwargs)
It accepts the same arguments as `tensorflow.atan`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.atan(x1, *args, **kwargs)
is equivalent to
builder.atan(*args, **kwargs)(x1)
tensorflow.atan
Computes atan of x element-wise.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def atrous_conv2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.atrous_conv2d(*args, **kwargs)
It accepts the same arguments as `tf.nn.atrous_conv2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.atrous_conv2d(x1, *args, **kwargs)
is equivalent to
builder.atrous_conv2d(*args, **kwargs)(x1)
tf.nn.atrous_conv2d
Atrous convolution (a.k.a. convolution with holes or dilated convolution).
Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` parameter is equal to one, it performs regular 2-D convolution. If the `rate` parameter is greater than one, it performs convolution with holes, sampling the input values every `rate` pixels in the `height` and `width` dimensions.
This is equivalent to convolving the input with a set of upsampled filters, produced by inserting `rate - 1` zeros between two consecutive values of the filters along the `height` and `width` dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).
More specifically:
    output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] * value[b, i + rate * di, j + rate * dj, q]
Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to `conv2d_transpose` in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.
For a description of atrous convolution and how it can be used for dense feature extraction, please see: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. The same operation is investigated further in Multi-Scale Context Aggregation by Dilated Convolutions. Previous works that effectively use atrous convolution in different ways are, among others, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks and Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks (http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing.
There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces
    atrous_conv2d(value, filters, rate, padding=padding)
to the following three operations:
    paddings = ...
    net = space_to_batch(value, paddings, block_size=rate)
    net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
    crops = ...
    net = batch_to_space(net, crops, block_size=rate)
Advanced usage. Note the following optimization: A sequence of `atrous_conv2d` operations with identical `rate` parameters, 'SAME' `padding`, and `filters` with odd heights/widths:
    net = atrous_conv2d(net, filters1, rate, padding="SAME")
    net = atrous_conv2d(net, filters2, rate, padding="SAME")
    ...
    net = atrous_conv2d(net, filtersK, rate, padding="SAME")
can be equivalently performed cheaper in terms of computation and memory as:
    pad = ...  # padding so that the input dims are multiples of rate
    net = space_to_batch(net, paddings=pad, block_size=rate)
    net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
    net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
    ...
    net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
    net = batch_to_space(net, crops=pad, block_size=rate)
because a pair of consecutive `space_to_batch` and `batch_to_space` ops with the same `block_size` cancel out when their respective `paddings` and `crops` inputs are identical.
Args:
value: A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC" format. Its shape is `[batch, in_height, in_width, in_channels]`.
filters: A 4-D `Tensor` with the same type as `value` and shape `[filter_height, filter_width, in_channels, out_channels]`. `filters`' `in_channels` dimension must match that of `value`. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height `filter_height + (filter_height - 1) * (rate - 1)` and effective width `filter_width + (filter_width - 1) * (rate - 1)`, produced by inserting `rate - 1` zeros along consecutive elements across the `filters`' spatial dimensions.
rate: A positive int32. The stride with which we sample input values across the `height` and `width` dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the `height` and `width` dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
name: Optional name for the returned tensor.
Returns:
A `Tensor` with the same type as `value`.
Raises:
ValueError: If input/output depth does not match `filters`' shape, or if padding is other than `'VALID'` or `'SAME'`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
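A sketch of the curried call under the same assumptions as the earlier examples; with rate equal to one this would reduce to a plain 2-D convolution, as described above:
    value = tf.placeholder(tf.float32, [1, 32, 32, 3])
    filters = tf.placeholder(tf.float32, [3, 3, 3, 8])
    # == tf.nn.atrous_conv2d(value, filters, 2, padding="SAME")
    out = T.atrous_conv2d(filters, 2, padding="SAME")(value)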
def atrous_conv2d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.atrous_conv2d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.atrous_conv2d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.atrous_conv2d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both 'rate' and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def atrous_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.atrous_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.atrous_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.atrous_conv2d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor of at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
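The generated `*_layer` variants differ only in pre-binding `activation_fn`; a sketch of the call shape under the same assumptions as the earlier examples (whether `tf.nn.atrous_conv2d` is a sensible activation function is a property of the generated binding, not of this sketch):
    x = tf.placeholder(tf.float32, [None, 16])
    # == tf.contrib.layers.fully_connected(x, 10, activation_fn=tf.nn.atrous_conv2d)
    h = T.atrous_conv2d_layer(10)(x)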
def audio_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.audio_summary(*args, **kwargs)
It accepts the same arguments as `tensorflow.audio_summary`.
However, the 2nd argument is omitted, a partial with the rest of the arguments is returned which expects the 2nd argument such that
tensorflow.audio_summary(x1, x2, *args, **kwargs)
is equivalent to
builder.audio_summary(x1, *args, **kwargs)(x2)
tensorflow.audio_summary
Outputs a `Summary` protocol buffer with audio.
The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor` which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.
The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:
- If `max_outputs` is 1, the summary value tag is 'tag/audio'.
- If `max_outputs` is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc.
Args:
tag: A scalar `Tensor` of type `string`. Used to build the `tag` of the summary values.
tensor: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`.
sample_rate: The sample rate of the signal in hertz.
max_outputs: Max number of batch elements to generate audio for.
collections: Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to `[ops.GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
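Note that here it is the 2nd argument that is omitted, so the partial keeps `tag` and waits for `tensor`; a sketch under the same assumptions as the earlier examples:
    tag = tf.constant("speech")
    audio = tf.placeholder(tf.float32, [4, 16000])
    # == tf.audio_summary(tag, audio, sample_rate=16000)
    summ = T.audio_summary(tag, sample_rate=16000)(audio)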
def avg_pool(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool(*args, **kwargs)
It accepts the same arguments as `tf.nn.avg_pool`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.avg_pool(x1, *args, **kwargs)
is equivalent to
builder.avg_pool(*args, **kwargs)(x1)
tf.nn.avg_pool
Performs the average pooling on the input.
Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.
Args:
value: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the comment here.
data_format: A string. 'NHWC' and 'NCHW' are supported.
name: Optional name for the operation.
Returns:
A `Tensor` with the same type as `value`. The average pooled output tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
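A sketch of 2x2 average pooling through the builder, under the same assumptions as the earlier examples:
    images = tf.placeholder(tf.float32, [None, 28, 28, 1])
    # == tf.nn.avg_pool(images, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    pooled = T.avg_pool(ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")(images)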
def avg_pool2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool2d(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.avg_pool2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.avg_pool2d(x1, *args, **kwargs)
is equivalent to
builder.avg_pool2d(*args, **kwargs)(x1)
tf.contrib.layers.avg_pool2d
Adds a 2D average pooling op.
It is assumed that the pooling is done per image but not in batch or channels.
Args:
inputs: A `Tensor` of size [batch_size, height, width, channels].
kernel_size: A list of length 2: [kernel_height, kernel_width] of the
pooling kernel over which the op is computed. Can be an int if both
values are the same.
stride: A list of length 2: [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: The padding method, either 'VALID' or 'SAME'.
outputs_collections: The collections to which the outputs are added.
scope: Optional scope for name_scope.
Returns:
A `Tensor` representing the results of the pooling operation.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def avg_pool3d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool3d(*args, **kwargs)
It accepts the same arguments as `tf.nn.avg_pool3d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.avg_pool3d(x1, *args, **kwargs)
is equivalent to
builder.avg_pool3d(*args, **kwargs)(x1)
tf.nn.avg_pool3d
Performs 3D average pooling on the input.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize: A list of `ints` that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: A list of `ints` that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. The average pooled output tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def avg_pool3d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool3d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.avg_pool3d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.avg_pool3d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both 'rate' and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def avg_pool3d_grad(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool3d_grad(*args, **kwargs)
It accepts the same arguments as `tf.nn.avg_pool3d_grad`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.avg_pool3d_grad(x1, *args, **kwargs)
is equivalent to
builder.avg_pool3d_grad(*args, **kwargs)(x1)
tf.nn.avg_pool3d_grad
Computes gradients of average pooling function.
Args:
orig_input_shape: A `Tensor` of type `int32`. The original input dimensions.
grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: A list of `ints` that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: A list of `ints` that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `grad`. The backprop for input.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def avg_pool3d_grad_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool3d_grad_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.avg_pool3d_grad_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.avg_pool3d_grad`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both 'rate' and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def avg_pool3d_grad_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool3d_grad_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.avg_pool3d_grad_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.avg_pool3d_grad`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor of at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def avg_pool3d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool3d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.avg_pool3d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.avg_pool3d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected
creates a variable called weights
, representing a fully
connected weight matrix, which is multiplied by the inputs
to produce a
Tensor
of hidden units. If a normalizer_fn
is provided (such as
batch_norm
), it is then applied. Otherwise, if normalizer_fn
is
None and a biases_initializer
is provided then a biases
variable would be
created and added the hidden units. Finally, if activation_fn
is not None
,
it is applied to the hidden units as well.
Note: that if inputs
have a rank greater than 2, then inputs
is flattened
prior to the initial matrix multiply by weights
.
Args:
inputs: A tensor of with at least rank 2 and value for the last dimension,
i.e. [batch_size, depth]
, [None, None, None, channels]
.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases
. If
normalizer_fn
is provided then biases_initializer
and
biases_regularizer
are ignored and biases
are not created nor added.
default set to None for no normalizer function
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True
also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES
(see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def avg_pool_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

    builder.avg_pool_conv2d_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.avg_pool`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well. Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.

Args:
- inputs: A 4-D tensor of shape `[batch_size, height, width, channels]`.
- num_outputs: Integer, the number of output filters.
- kernel_size: A list of length 2, `[kernel_height, kernel_width]`, giving the size of the filters. Can be an int if both values are the same.
- stride: A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
- padding: One of `VALID` or `SAME`.
- rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both `rate` and `stride` are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
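A minimal sketch of the documented rewrite for the convolution-based variant, assuming a hypothetical `builder` instance, a TF 1.x-era API, and illustrative `num_outputs`/`kernel_size` values:

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 3])  # 4-D NHWC input

# Capture every argument except `inputs`:
conv = builder.avg_pool_conv2d_layer(32, [3, 3])

# `conv(images)` then stands for
#   tf.contrib.layers.convolution2d(images, 32, [3, 3],
#                                   activation_fn=tf.nn.avg_pool)
# per the equivalence stated above.
```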
def avg_pool_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.avg_pool_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

    builder.avg_pool_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.avg_pool`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.

Args:
- inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
- num_outputs: Integer or long, the number of output units in the layer.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def batch_norm_with_global_normalization(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_norm_with_global_normalization(*args, **kwargs)
It accepts the same arguments as `tf.nn.batch_norm_with_global_normalization`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.nn.batch_norm_with_global_normalization(x1, *args, **kwargs)

is equivalent to

    builder.batch_norm_with_global_normalization(*args, **kwargs)(x1)

tf.nn.batch_norm_with_global_normalization

Batch normalization.

This op is deprecated. See `tf.nn.batch_normalization`.

Args:
- t: A 4D input Tensor.
- m: A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
- v: A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
- beta: A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
- gamma: A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.
- variance_epsilon: A small float number to avoid dividing by 0.
- scale_after_normalization: A bool indicating whether the resulting tensor needs to be multiplied with gamma.
- name: A name for this operation (optional).

Returns: A batch-normalized `t`.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
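A sketch of the partial form, assuming a hypothetical `builder` instance and a TF 1.x-era API, with statistics computed via `tf.nn.moments` as the docstring suggests:

```python
import tensorflow as tf

t = tf.placeholder(tf.float32, [None, 8, 8, 16])  # 4-D input
m, v = tf.nn.moments(t, axes=[0, 1, 2])           # per-channel mean/variance
beta, gamma = tf.zeros([16]), tf.ones([16])

# Equivalent to tf.nn.batch_norm_with_global_normalization(
#     t, m, v, beta, gamma, 1e-5, scale_after_normalization=True):
y = builder.batch_norm_with_global_normalization(
    m, v, beta, gamma, 1e-5, scale_after_normalization=True)(t)
```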
def batch_norm_with_global_normalization_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_norm_with_global_normalization_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

    builder.batch_norm_with_global_normalization_conv2d_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.batch_norm_with_global_normalization`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well. Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.

Args:
- inputs: A 4-D tensor of shape `[batch_size, height, width, channels]`.
- num_outputs: Integer, the number of output filters.
- kernel_size: A list of length 2, `[kernel_height, kernel_width]`, giving the size of the filters. Can be an int if both values are the same.
- stride: A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
- padding: One of `VALID` or `SAME`.
- rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both `rate` and `stride` are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def batch_norm_with_global_normalization_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_norm_with_global_normalization_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

    builder.batch_norm_with_global_normalization_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.batch_norm_with_global_normalization`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.

Args:
- inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
- num_outputs: Integer or long, the number of output units in the layer.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def batch_normalization(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_normalization(*args, **kwargs)
It accepts the same arguments as `tf.nn.batch_normalization`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.nn.batch_normalization(x1, *args, **kwargs)

is equivalent to

    builder.batch_normalization(*args, **kwargs)(x1)

tf.nn.batch_normalization

Batch normalization.

As described in http://arxiv.org/abs/1502.03167.

Normalizes a tensor by `mean` and `variance`, and applies (optionally) a `scale` \(\gamma\) to it, as well as an `offset` \(\beta\):

\[ \frac{\gamma (x - \mu)}{\sigma} + \beta \]

`mean`, `variance`, `offset` and `scale` are all expected to be of one of two shapes:

- In all generality, they can have the same number of dimensions as the input `x`, with identical sizes as `x` for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=True)` during training, or running averages thereof during inference.
- In the common case where the 'depth' dimension is the last dimension in the input tensor `x`, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case, for example, for the common `[batch, depth]` layout of fully-connected layers, and `[batch, height, width, depth]` for convolutions. `mean` and `variance` in this case would typically be the outputs of `tf.nn.moments(..., keep_dims=False)` during training, or running averages thereof during inference.

Args:
- x: Input `Tensor` of arbitrary dimensionality.
- mean: A mean `Tensor`.
- variance: A variance `Tensor`.
- offset: An offset `Tensor`, often denoted \(\beta\) in equations, or None. If present, will be added to the normalized tensor.
- scale: A scale `Tensor`, often denoted \(\gamma\) in equations, or None. If present, the scale is applied to the normalized tensor.
- variance_epsilon: A small float number to avoid dividing by 0.
- name: A name for this operation (optional).

Returns: the normalized, scaled, offset tensor.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
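A minimal sketch for the common `[batch, depth]` case, assuming a hypothetical `builder` instance and a TF 1.x-era API:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64])   # [batch, depth]
mean, variance = tf.nn.moments(x, axes=[0])  # per-feature statistics

# The direct call and the builder partial build the same graph:
y1 = tf.nn.batch_normalization(x, mean, variance, None, None, 1e-5)
y2 = builder.batch_normalization(mean, variance, None, None, 1e-5)(x)
```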
def batch_normalization_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_normalization_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

    builder.batch_normalization_conv2d_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.batch_normalization`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well. Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.

Args:
- inputs: A 4-D tensor of shape `[batch_size, height, width, channels]`.
- num_outputs: Integer, the number of output filters.
- kernel_size: A list of length 2, `[kernel_height, kernel_width]`, giving the size of the filters. Can be an int if both values are the same.
- stride: A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
- padding: One of `VALID` or `SAME`.
- rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both `rate` and `stride` are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def batch_normalization_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_normalization_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

    builder.batch_normalization_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.batch_normalization`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.

Args:
- inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
- num_outputs: Integer or long, the number of output units in the layer.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def batch_to_space(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_to_space(*args, **kwargs)
It accepts the same arguments as `tensorflow.batch_to_space`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tensorflow.batch_to_space(x1, *args, **kwargs)

is equivalent to

    builder.batch_to_space(*args, **kwargs)(x1)

tensorflow.batch_to_space

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the `batch` dimension are moved in spatial blocks to the `height` and `width` dimensions, followed by cropping along the `height` and `width` dimensions.

Args:
- input: A `Tensor`. 4-D tensor with shape `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]`. Note that the batch size of the input tensor must be divisible by `block_size * block_size`.
- crops: A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows: `crops = [[crop_top, crop_bottom], [crop_left, crop_right]]`
- block_size: An `int` that is `>= 2`.
- name: A name for the operation (optional).

Returns: A `Tensor`. Has the same type as `input`. 4-D with shape `[batch, height, width, depth]`, where:

    height = height_pad - crop_top - crop_bottom
    width = width_pad - crop_left - crop_right

The attr `block_size` must be greater than one. It indicates the block size.

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:

    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

The output tensor has shape `[1, 2, 2, 1]` and value:

    x = [[[[1], [2]], [[3], [4]]]]

(2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:

    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]

The output tensor has shape `[1, 2, 2, 3]` and value:

    x = [[[[1, 2, 3], [4, 5, 6]],
          [[7, 8, 9], [10, 11, 12]]]]

(3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:

    x = [[[[1], [3]], [[5], [7]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

The output tensor has shape `[1, 4, 4, 1]` and value:

    x = [[[1], [2], [3], [4]],
         [[5], [6], [7], [8]],
         [[9], [10], [11], [12]],
         [[13], [14], [15], [16]]]

(4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:

    x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
         [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]

The output tensor has shape `[2, 2, 4, 1]` and value:

    x = [[[[1], [3]], [[5], [7]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
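A minimal sketch reproducing example (1), assuming a hypothetical `builder` instance and a TF 1.x-era API:

```python
import tensorflow as tf

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]
crops = [[0, 0], [0, 0]]                               # no cropping

y1 = tf.batch_to_space(x, crops, block_size=2)
y2 = builder.batch_to_space(crops, block_size=2)(x)
# Both have shape [1, 2, 2, 1] and value [[[[1], [2]], [[3], [4]]]],
# matching example (1) above.
```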
def batch_to_space_nd(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.batch_to_space_nd(*args, **kwargs)
It accepts the same arguments as `tensorflow.batch_to_space_nd`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tensorflow.batch_to_space_nd(x1, *args, **kwargs)

is equivalent to

    builder.batch_to_space_nd(*args, **kwargs)(x1)

tensorflow.batch_to_space_nd

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape `block_shape + [batch]`, interleaves these blocks back into the grid defined by the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to `crops` to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.

Args:
- input: A `Tensor`. N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, where spatial_shape has M dimensions.
- block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D with shape `[M]`, all values must be >= 1.
- crops: A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D with shape `[M, 2]`, all values must be >= 0. `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.
- name: A name for the operation (optional).

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape: `[block_shape[0], ..., block_shape[M-1], batch / prod(block_shape), input_shape[1], ..., input_shape[N-1]]`
2. Permute dimensions of `reshaped` to produce `permuted` of shape `[batch / prod(block_shape), input_shape[1], block_shape[0], ..., input_shape[M], block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]`
3. Reshape `permuted` to produce `reshaped_permuted` of shape `[batch / prod(block_shape), input_shape[1] * block_shape[0], ..., input_shape[M] * block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]`
4. Crop the start and end of dimensions `[1, ..., M]` of `reshaped_permuted` according to `crops` to produce the output of shape: `[batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], ..., input_shape[N-1]]`

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

The output tensor has shape `[1, 2, 2, 1]` and value:

    x = [[[[1], [2]], [[3], [4]]]]

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]

The output tensor has shape `[1, 2, 2, 3]` and value:

    x = [[[[1, 2, 3], [4, 5, 6]],
          [[7, 8, 9], [10, 11, 12]]]]

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

    x = [[[[1], [3]], [[5], [7]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

The output tensor has shape `[1, 4, 4, 1]` and value:

    x = [[[1], [2], [3], [4]],
         [[5], [6], [7], [8]],
         [[9], [10], [11], [12]],
         [[13], [14], [15], [16]]]

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [2, 0]]`:

    x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
         [[[0], [2], [4]]], [[[0], [10], [12]]],
         [[[0], [5], [7]]], [[[0], [13], [15]]],
         [[[0], [6], [8]]], [[[0], [14], [16]]]]

The output tensor has shape `[2, 2, 4, 1]` and value:

    x = [[[[1], [2], [3], [4]],
          [[5], [6], [7], [8]]],
         [[[9], [10], [11], [12]],
          [[13], [14], [15], [16]]]]

Returns: A `Tensor`. Has the same type as `input`.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
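The same example (1), in both direct and builder form; a minimal sketch assuming a hypothetical `builder` instance and a TF 1.x-era API:

```python
import tensorflow as tf

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]

y1 = tf.batch_to_space_nd(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])
y2 = builder.batch_to_space_nd([2, 2], [[0, 0], [0, 0]])(x)
# Both yield shape [1, 2, 2, 1], as in example (1) above.
```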
def betainc(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.betainc(*args, **kwargs)
It accepts the same arguments as `tensorflow.betainc`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tensorflow.betainc(x1, *args, **kwargs)

is equivalent to

    builder.betainc(*args, **kwargs)(x1)

tensorflow.betainc

Compute the regularized incomplete beta integral \(I_x(a, b)\).

The regularized incomplete beta integral is defined as:

\[ I_x(a, b) = \frac{B(x; a, b)}{B(a, b)} \]

where

\[ B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} \, dt \]

is the incomplete beta function and \(B(a, b)\) is the complete beta function.

Args:
- a: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- b: A `Tensor`. Must have the same type as `a`.
- x: A `Tensor`. Must have the same type as `a`.
- name: A name for the operation (optional).

Returns: A `Tensor`. Has the same type as `a`.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
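A minimal sketch, assuming a hypothetical `builder` instance and a TF 1.x-era API; note that the builder partial receives the wrapped function's 1st argument, `a`, last:

```python
import tensorflow as tf

a, b, x = tf.constant(2.0), tf.constant(3.0), tf.constant(0.5)

# I_0.5(2, 3) = 0.6875 in both forms:
y1 = tf.betainc(a, b, x)
y2 = builder.betainc(b, x)(a)
```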
def bias_add(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add(*args, **kwargs)
It accepts the same arguments as `tf.nn.bias_add`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.nn.bias_add(x1, *args, **kwargs)

is equivalent to

    builder.bias_add(*args, **kwargs)(x1)

tf.nn.bias_add

Adds `bias` to `value`.

This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.

Args:
- value: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
- bias: A 1-D `Tensor` with size matching the last dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
- data_format: A string. 'NHWC' and 'NCHW' are supported.
- name: A name for the operation (optional).

Returns: A `Tensor` with the same type as `value`.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
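A minimal sketch, assuming a hypothetical `builder` instance and a TF 1.x-era API:

```python
import tensorflow as tf

value = tf.placeholder(tf.float32, [None, 4, 4, 3])  # NHWC
bias = tf.constant([0.1, 0.2, 0.3])                  # matches last dimension

# Direct call and builder partial are equivalent per the docs above:
y1 = tf.nn.bias_add(value, bias)
y2 = builder.bias_add(bias)(value)
```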
def bias_add_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_conv2d_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.bias_add`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well. Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.

Args:
- inputs: A 4-D tensor of shape `[batch_size, height, width, channels]`.
- num_outputs: Integer, the number of output filters.
- kernel_size: A list of length 2, `[kernel_height, kernel_width]`, giving the size of the filters. Can be an int if both values are the same.
- stride: A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
- padding: One of `VALID` or `SAME`.
- rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both `rate` and `stride` are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bias_add_grad(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_grad(*args, **kwargs)
It accepts the same arguments as `tf.nn.bias_add_grad`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.nn.bias_add_grad(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_grad(*args, **kwargs)(x1)

tf.nn.bias_add_grad

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.

Args:
- out_backprop: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Any number of dimensions.
- data_format: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be "NCHW", the data storage order of `[batch, in_channels, in_height, in_width]`; the tensor will be added to "in_channels", the third-to-last dimension.
- name: A name for the operation (optional).

Returns: A `Tensor`. Has the same type as `out_backprop`. 1-D with size the feature dimension of `out_backprop`.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
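A sketch, under the assumption that `tf.nn.bias_add_grad` is exposed in your TensorFlow build (as the generated doc above presumes) and with a hypothetical `builder` instance:

```python
import tensorflow as tf

out_backprop = tf.placeholder(tf.float32, [None, 4, 4, 3])  # NHWC

# Accumulates out_backprop over all but the feature (last) dimension;
# both results are 1-D of size 3:
db1 = tf.nn.bias_add_grad(out_backprop)
db2 = builder.bias_add_grad()(out_backprop)
```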
def bias_add_grad_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_grad_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_grad_conv2d_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.bias_add_grad`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well. Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.

Args:
- inputs: A 4-D tensor of shape `[batch_size, height, width, channels]`.
- num_outputs: Integer, the number of output filters.
- kernel_size: A list of length 2, `[kernel_height, kernel_width]`, giving the size of the filters. Can be an int if both values are the same.
- stride: A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
- padding: One of `VALID` or `SAME`.
- rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both `rate` and `stride` are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bias_add_grad_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_grad_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_grad_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.bias_add_grad`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.

Args:
- inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
- num_outputs: Integer or long, the number of output units in the layer.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bias_add_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.bias_add`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.

Args:
- inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
- num_outputs: Integer or long, the number of output units in the layer.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bias_add_v1(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_v1(*args, **kwargs)
It accepts the same arguments as `tf.nn.bias_add_v1`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.nn.bias_add_v1(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_v1(*args, **kwargs)(x1)

tf.nn.bias_add_v1

Adds `bias` to `value`.

This is a deprecated version of bias_add and will soon be removed.

This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D. Broadcasting is supported, so `value` may have any number of dimensions. Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the case where both types are quantized.

Args:
- value: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, or `complex128`.
- bias: A 1-D `Tensor` with size matching the last dimension of `value`. Must be the same type as `value` unless `value` is a quantized type, in which case a different quantized type may be used.
- name: A name for the operation (optional).

Returns: A `Tensor` with the same type as `value`.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bias_add_v1_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_v1_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_v1_conv2d_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.bias_add_v1`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well. Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.

Args:
- inputs: A 4-D tensor of shape `[batch_size, height, width, channels]`.
- num_outputs: Integer, the number of output filters.
- kernel_size: A list of length 2, `[kernel_height, kernel_width]`, giving the size of the filters. Can be an int if both values are the same.
- stride: A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
- padding: One of `VALID` or `SAME`.
- rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: a tensor representing the output of the operation.

Raises: ValueError: if both `rate` and `stride` are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bias_add_v1_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bias_add_v1_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial over the rest of the arguments is returned which expects the 1st argument, so that

    tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

    builder.bias_add_v1_layer(*args, **kwargs)(x1)

with the keyword argument `activation_fn` set to `tf.nn.bias_add_v1`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.

Args:
- inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
- num_outputs: Integer or long, the number of output units in the layer.
- activation_fn: Activation function; set to None to skip it and maintain a linear activation.
- normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Defaults to None for no normalizer function.
- normalizer_params: Normalization function parameters.
- weights_initializer: An initializer for the weights.
- weights_regularizer: Optional regularizer for the weights.
- biases_initializer: An initializer for the biases. If None, skip biases.
- biases_regularizer: Optional regularizer for the biases.
- reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
- outputs_collections: Collection to add the outputs to.
- trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
- scope: Optional scope for `variable_scope`.

Returns: the tensor variable representing the result of the series of operations.

Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def bidirectional_dynamic_rnn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bidirectional_dynamic_rnn(*args, **kwargs)
It accepts the same arguments as tf.nn.bidirectional_dynamic_rnn
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.bidirectional_dynamic_rnn(x1, *args, **kwargs)
is equivalent to
builder.bidirectional_dynamic_rnn(*args, **kwargs)(x1)
tf.nn.bidirectional_dynamic_rnn
Creates a dynamic version of bidirectional recurrent neural network.
Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Args:
cell_fw: An instance of RNNCell, to be used for forward direction.
cell_bw: An instance of RNNCell, to be used for backward direction.
inputs: The RNN inputs.
If time_major == False (default), this must be a tensor of shape:
[batch_size, max_time, input_size]
.
If time_major == True, this must be a tensor of shape:
[max_time, batch_size, input_size]
.
[batch_size, input_size].
sequence_length: An int32/int64 vector, size [batch_size]
,
containing the actual lengths for each of the sequences.
initial_state_fw: (optional) An initial state for the forward RNN.
This must be a tensor of appropriate type and shape
[batch_size, cell_fw.state_size]
.
If cell_fw.state_size
is a tuple, this should be a tuple of
tensors having shapes [batch_size, s] for s in cell_fw.state_size
.
initial_state_bw: (optional) Same as for initial_state_fw
, but using
the corresponding properties of cell_bw
.
dtype: (optional) The data type for the initial states and expected output.
Required if initial_states are not provided or RNN states have a
heterogeneous dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency
and can be run in parallel, will be. This parameter trades off
time for space. Values >> 1 use more memory but take less time,
while smaller values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors.
If true, these Tensors must be shaped `[max_time, batch_size, depth]`.
If false, these Tensors must be shaped `[batch_size, max_time, depth]`.
Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "BiRNN"
Returns:
A tuple (outputs, output_states) where:
outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output `Tensor`.
If `time_major == False` (default), output_fw will be a `Tensor` shaped `[batch_size, max_time, cell_fw.output_size]` and output_bw will be a `Tensor` shaped `[batch_size, max_time, cell_bw.output_size]`.
If `time_major == True`, output_fw will be a `Tensor` shaped `[max_time, batch_size, cell_fw.output_size]` and output_bw will be a `Tensor` shaped `[max_time, batch_size, cell_bw.output_size]`.
It returns a tuple instead of a single concatenated `Tensor`, unlike `bidirectional_rnn`. If the concatenated one is preferred, the forward and backward outputs can be concatenated as `tf.concat(2, outputs)`.
output_states: A tuple (output_state_fw, output_state_bw) containing
the forward and the backward final states of bidirectional rnn.
Raises:
TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def bidirectional_dynamic_rnn_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bidirectional_dynamic_rnn_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.bidirectional_dynamic_rnn_conv2d_layer(*args, **kwargs)(x1), with the keyword argument `activation_fn` set to `tf.nn.bidirectional_dynamic_rnn`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution, with input stride equal to `rate`, if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and keep a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def bidirectional_dynamic_rnn_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bidirectional_dynamic_rnn_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.bidirectional_dynamic_rnn_layer(*args, **kwargs)(x1), with the keyword argument `activation_fn` set to `tf.nn.bidirectional_dynamic_rnn`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` has a rank greater than 2, it is flattened prior to the initial matrix multiply with `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and keep a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def bidirectional_rnn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bidirectional_rnn(*args, **kwargs)
It accepts the same arguments as `tf.nn.bidirectional_rnn`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.nn.bidirectional_rnn(x1, *args, **kwargs)
is equivalent to
builder.bidirectional_rnn(*args, **kwargs)(x1)
tf.nn.bidirectional_rnn
Creates a bidirectional recurrent neural network.
Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
Args:
cell_fw: An instance of RNNCell, to be used for forward direction.
cell_bw: An instance of RNNCell, to be used for backward direction.
inputs: A length T list of inputs, each a tensor of shape
[batch_size, input_size], or a nested tuple of such elements.
initial_state_fw: (optional) An initial state for the forward RNN.
This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`.
If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
initial_state_bw: (optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
dtype: (optional) The data type for the initial state. Required if either of the initial states are not provided.
sequence_length: (optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences.
scope: VariableScope for the created subgraph; defaults to "BiRNN"
Returns:
A tuple (outputs, output_state_fw, output_state_bw) where:
outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs.
output_state_fw is the final state of the forward rnn.
output_state_bw is the final state of the backward rnn.
Raises:
TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
ValueError: If inputs is None or an empty list.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def bidirectional_rnn_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bidirectional_rnn_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.bidirectional_rnn_conv2d_layer(*args, **kwargs)(x1), with the keyword argument `activation_fn` set to `tf.nn.bidirectional_rnn`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution, with input stride equal to `rate`, if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and keep a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def bidirectional_rnn_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bidirectional_rnn_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.bidirectional_rnn_layer(*args, **kwargs)(x1), with the keyword argument `activation_fn` set to `tf.nn.bidirectional_rnn`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` has a rank greater than 2, it is flattened prior to the initial matrix multiply with `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and keep a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def bitcast(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.bitcast(*args, **kwargs)
It accepts the same arguments as `tensorflow.bitcast`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.bitcast(x1, *args, **kwargs)
is equivalent to
builder.bitcast(*args, **kwargs)(x1)
tensorflow.bitcast
Bitcasts a tensor from one type to another without copying data.
Given a tensor `input`, this operation returns a tensor that has the same buffer data as `input` with datatype `type`.
If the input datatype `T` is larger than the output datatype `type` then the shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].
If `T` is smaller than `type`, the operator requires that the rightmost dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from [..., sizeof(`type`)/sizeof(`T`)] to [...].
NOTE: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
type: A `tf.DType` from: tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `type`.
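As a concrete instance of the sizeof rule above, a `float32` tensor bitcast to `uint8` gains a trailing dimension of 4; a minimal sketch (shapes are illustrative):
```python
import tensorflow as tf

x = tf.zeros([2, 3], dtype=tf.float32)  # sizeof(float32) = 4 bytes
y = tf.bitcast(x, tf.uint8)             # T larger than type: shape becomes [2, 3, 4]
z = tf.bitcast(y, tf.float32)           # rightmost dim 4 is consumed: back to [2, 3]
```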
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def boolean_mask(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.boolean_mask(*args, **kwargs)
It accepts the same arguments as `tensorflow.boolean_mask`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.boolean_mask(x1, *args, **kwargs)
is equivalent to
builder.boolean_mask(*args, **kwargs)(x1)
tensorflow.boolean_mask
Apply boolean mask to tensor. Numpy equivalent is `tensor[mask]`.
```python
# 1-D example
tensor = [0, 1, 2, 3]
mask = [True, False, True, False]
boolean_mask(tensor, mask)  # ==> [0, 2]
```
In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match the first K dimensions of `tensor`'s shape. We then have:
`boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
where `(i1,...,iK)` is the ith True entry of `mask` (row-major order).
Args: tensor: N-D tensor. mask: K-D boolean tensor, K <= N and K must be known statically. name: A name for this operation (optional).
Returns:
Tensor populated by entries in `tensor` corresponding to True values in `mask`.
Raises: ValueError: If shapes do not conform.
Examples:
```python
# 2-D example
tensor = [[1, 2], [3, 4], [5, 6]]
mask = [True, False, True]
boolean_mask(tensor, mask)  # ==> [[1, 2], [5, 6]]
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def case(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.case(*args, **kwargs)
It accepts the same arguments as `tensorflow.case`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.case(x1, *args, **kwargs)
is equivalent to
builder.case(*args, **kwargs)(x1)
tensorflow.case
Create a case operation.
The `pred_fn_pairs` parameter is a dict or list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. `default` is a callable generating a list of tensors. All the callables in `pred_fn_pairs` as well as `default` should return the same number and types of tensors.
If `exclusive==True`, all predicates are evaluated, and a logging operation with an error is returned if more than one of the predicates evaluates to True. If `exclusive==False`, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by `default`.
Example 1:
Pseudocode:
if (x < y) return 17;
else return 23;
Expressions:
f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = case([(tf.less(x, y), f1)], default=f2)
Example 2:
Pseudocode:
if (x < y && x > z) raise OpError("Only one predicate may evaluate true");
if (x < y) return 17;
else if (x > z) return 23;
else return -1;
Expressions:
x = tf.constant(0)
y = tf.constant(1)
z = tf.constant(2)
def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = case({tf.less(x, y): f1, tf.greater(x, z): f2},
default=f3, exclusive=True)
Args: pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a callable which returns a list of tensors. default: A callable that returns a list of tensors. exclusive: True iff more than one predicate is allowed to evaluate to True. name: A name for this operation (optional).
Returns:
The tensors returned by the first pair whose predicate evaluated to True, or those returned by `default` if none does.
Raises:
TypeError: If `pred_fn_pairs` is not a list/dictionary.
TypeError: If `pred_fn_pairs` is a list but does not contain 2-tuples.
TypeError: If `fns[i]` is not callable for any i, or `default` is not callable.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def cast(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cast(*args, **kwargs)
It accepts the same arguments as `tensorflow.cast`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.cast(x1, *args, **kwargs)
is equivalent to
builder.cast(*args, **kwargs)(x1)
tensorflow.cast
Casts a tensor to a new type.
The operation casts `x` (in case of `Tensor`) or `x.values` (in case of `SparseTensor`) to `dtype`.
For example:
```python
# tensor `a` is [1.8, 2.2], dtype=tf.float
tf.cast(a, tf.int32)  # ==> [1, 2], dtype=tf.int32
```
Args:
x: A `Tensor` or `SparseTensor`.
dtype: The destination type.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor` with same shape as `x`.
Raises:
TypeError: If `x` cannot be cast to the `dtype`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def ceil(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ceil(*args, **kwargs)
It accepts the same arguments as `tensorflow.ceil`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.ceil(x1, *args, **kwargs)
is equivalent to
builder.ceil(*args, **kwargs)(x1)
tensorflow.ceil
Returns, element-wise, the smallest integer not less than x.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def check_numerics(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.check_numerics(*args, **kwargs)
It accepts the same arguments as `tensorflow.check_numerics`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.check_numerics(x1, *args, **kwargs)
is equivalent to
builder.check_numerics(*args, **kwargs)(x1)
tensorflow.check_numerics
Checks a tensor for NaN and Inf values.
When run, reports an `InvalidArgument` error if `tensor` has any values that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.
Args:
tensor: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
message: A `string`. Prefix of the error message.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `tensor`.
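Because the op passes its input through unchanged, it is typically spliced inline by re-binding the tensor; a minimal sketch with assumed names:
```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 10])
# Re-bind so every downstream op sees the checked tensor:
logits = tf.check_numerics(logits, "logits contain NaN/Inf")
```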
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def cholesky(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cholesky(*args, **kwargs)
It accepts the same arguments as `tensorflow.cholesky`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.cholesky(x1, *args, **kwargs)
is equivalent to
builder.cholesky(*args, **kwargs)(x1)
tensorflow.cholesky
Computes the Cholesky decomposition of one or more square matrices.
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix Cholesky decomposition above. The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
Args:
input: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def cholesky_solve(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cholesky_solve(*args, **kwargs)
It accepts the same arguments as `tensorflow.cholesky_solve`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.cholesky_solve(x1, *args, **kwargs)
is equivalent to
builder.cholesky_solve(*args, **kwargs)(x1)
tensorflow.cholesky_solve
Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.
```python
# Solve 10 separate 2x2 linear systems:
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 1
chol = tf.cholesky(A)             # shape 10 x 2 x 2
X = tf.cholesky_solve(chol, RHS)  # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
# X[3, :, 0] is the solution to the linear system A[3, :, :] x = RHS[3, :, 0]

# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 5
# ...
# X[3, :, 2] is the solution to the linear system A[3, :, :] x = RHS[3, :, 2]
```
Args:
chol: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`. Cholesky factorization of `A`, e.g. `chol = tf.cholesky(A)`. For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of `chol` are used. The strictly upper part is assumed to be zero and not accessed.
rhs: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.
name: A name to give this `Op`. Defaults to `cholesky_solve`.
Returns:
Solution to `A x = rhs`, shape `[..., M, K]`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def clip_by_average_norm(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.clip_by_average_norm(*args, **kwargs)
It accepts the same arguments as `tensorflow.clip_by_average_norm`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.clip_by_average_norm(x1, *args, **kwargs)
is equivalent to
builder.clip_by_average_norm(*args, **kwargs)(x1)
tensorflow.clip_by_average_norm
Clips tensor values to a maximum average L2-norm.
Given a tensor `t`, and a maximum clip value `clip_norm`, this operation normalizes `t` so that its average L2-norm is less than or equal to `clip_norm`. Specifically, if the average L2-norm is already less than or equal to `clip_norm`, then `t` is not modified. If the average L2-norm is greater than `clip_norm`, then this operation returns a tensor of the same type and shape as `t` with its values set to:
t * clip_norm / l2norm_avg(t)
In this case, the average L2-norm of the output tensor is `clip_norm`.
This operation is typically used to clip gradients before applying them with an optimizer.
Args:
t: A `Tensor`.
clip_norm: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
name: A name for the operation (optional).
Returns:
A clipped `Tensor`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def clip_by_global_norm(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.clip_by_global_norm(*args, **kwargs)
It accepts the same arguments as `tensorflow.clip_by_global_norm`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.clip_by_global_norm(x1, *args, **kwargs)
is equivalent to
builder.clip_by_global_norm(*args, **kwargs)(x1)
tensorflow.clip_by_global_norm
Clips values of multiple tensors by the ratio of the sum of their norms.
Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`, this operation returns a list of clipped tensors `list_clipped` and the global norm (`global_norm`) of all tensors in `t_list`. Optionally, if you've already computed the global norm for `t_list`, you can specify the global norm with `use_norm`.
To perform the clipping, the values `t_list[i]` are set to:
t_list[i] * clip_norm / max(global_norm, clip_norm)
where:
global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))
If `clip_norm > global_norm` then the entries in `t_list` remain as they are; otherwise they're all shrunk by the global ratio.
Any of the entries of `t_list` that are of type None are ignored.
This is the correct way to perform gradient clipping (for example, see Pascanu et al., 2012 (pdf)). However, it is slower than `clip_by_norm()` because all the parameters must be ready before the clipping operation can be performed.
Args:
t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
clip_norm: A 0-D (scalar) `Tensor` > 0. The clipping ratio.
use_norm: A 0-D (scalar) `Tensor` of type `float` (optional). The global norm to use. If not provided, `global_norm()` is used to compute the norm.
name: A name for the operation (optional).
Returns:
list_clipped: A list of `Tensors` of the same type as `list_t`.
global_norm: A 0-D (scalar) `Tensor` representing the global norm.
Raises:
TypeError: If `t_list` is not a sequence.
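The gradient-clipping pattern the note above refers to looks roughly like this; the stand-in loss and the optimizer settings are illustrative assumptions:
```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
w = tf.Variable(tf.zeros([4, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))  # stand-in loss

params = tf.trainable_variables()
grads = tf.gradients(loss, params)
clipped, global_norm = tf.clip_by_global_norm(grads, 5.0)  # shrink by global ratio
train_op = tf.train.GradientDescentOptimizer(0.1).apply_gradients(zip(clipped, params))
```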
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def clip_by_norm(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.clip_by_norm(*args, **kwargs)
It accepts the same arguments as `tensorflow.clip_by_norm`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.clip_by_norm(x1, *args, **kwargs)
is equivalent to
builder.clip_by_norm(*args, **kwargs)(x1)
tensorflow.clip_by_norm
Clips tensor values to a maximum L2-norm.
Given a tensor `t`, and a maximum clip value `clip_norm`, this operation normalizes `t` so that its L2-norm is less than or equal to `clip_norm`, along the dimensions given in `axes`. Specifically, in the default case where all dimensions are used for calculation, if the L2-norm of `t` is already less than or equal to `clip_norm`, then `t` is not modified. If the L2-norm is greater than `clip_norm`, then this operation returns a tensor of the same type and shape as `t` with its values set to:
t * clip_norm / l2norm(t)
In this case, the L2-norm of the output tensor is `clip_norm`.
As another example, if `t` is a matrix and `axes == [1]`, then each row of the output will have L2-norm equal to `clip_norm`. If `axes == [0]` instead, each column of the output will be clipped.
This operation is typically used to clip gradients before applying them with an optimizer.
Args:
t: A `Tensor`.
clip_norm: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
axes: A 1-D (vector) `Tensor` of type int32 containing the dimensions to use for computing the L2-norm. If None (the default), uses all dimensions.
name: A name for the operation (optional).
Returns:
A clipped `Tensor`.
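To make the `axes` behavior concrete, a small sketch with values chosen so the norms are easy to check by hand:
```python
import tensorflow as tf

t = tf.constant([[3.0, 4.0],   # row L2-norm 5
                 [6.0, 8.0]])  # row L2-norm 10
rows = tf.clip_by_norm(t, 1.0, axes=[1])  # each row rescaled to norm 1
# ==> [[0.6, 0.8], [0.6, 0.8]]
```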
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def clip_by_value(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.clip_by_value(*args, **kwargs)
It accepts the same arguments as `tensorflow.clip_by_value`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.clip_by_value(x1, *args, **kwargs)
is equivalent to
builder.clip_by_value(*args, **kwargs)(x1)
tensorflow.clip_by_value
Clips tensor values to a specified min and max.
Given a tensor `t`, this operation returns a tensor of the same type and shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`. Any values less than `clip_value_min` are set to `clip_value_min`. Any values greater than `clip_value_max` are set to `clip_value_max`.
Args:
t: A `Tensor`.
clip_value_min: A 0-D (scalar) `Tensor`. The minimum value to clip by.
clip_value_max: A 0-D (scalar) `Tensor`. The maximum value to clip by.
name: A name for the operation (optional).
Returns:
A clipped `Tensor`.
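A tiny sketch of the min/max clipping behavior:
```python
import tensorflow as tf

t = tf.constant([-2.0, 0.5, 3.0])
clipped = tf.clip_by_value(t, 0.0, 1.0)  # ==> [0.0, 0.5, 1.0]
```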
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def complex(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.complex(*args, **kwargs)
It accepts the same arguments as `tensorflow.complex`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.complex(x1, *args, **kwargs)
is equivalent to
builder.complex(*args, **kwargs)(x1)
tensorflow.complex
Converts two real numbers to a complex number.
Given a tensor `real` representing the real part of a complex number, and a tensor `imag` representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form (a + bj), where a represents the `real` part and b represents the `imag` part.
The input tensors `real` and `imag` must have the same shape.
For example:
```
# tensor 'real' is [2.25, 3.25]
# tensor 'imag' is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
```
Args:
real: A `Tensor`. Must be one of the following types: `float32`, `float64`.
imag: A `Tensor`. Must have the same type as `real`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64` or `complex128`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def complex_abs(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.complex_abs(*args, **kwargs)
It accepts the same arguments as `tensorflow.complex_abs`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.complex_abs(x1, *args, **kwargs)
is equivalent to
builder.complex_abs(*args, **kwargs)(x1)
tensorflow.complex_abs
Computes the complex absolute value of a tensor.
Given a tensor `x` of complex numbers, this operation returns a tensor of type `float32` or `float64` that is the absolute value of each element in `x`. All elements in `x` must be complex numbers of the form \(a + bj\). The absolute value is computed as \(\sqrt{a^2 + b^2}\).
For example:
```
# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
tf.complex_abs(x) ==> [5.25594902, 6.60492229]
```
Args:
x: A `Tensor` of type `complex64` or `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `float32` or `float64`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def compute_accidental_hits(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.compute_accidental_hits(*args, **kwargs)
It accepts the same arguments as `tf.nn.compute_accidental_hits`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.nn.compute_accidental_hits(x1, *args, **kwargs)
is equivalent to
builder.compute_accidental_hits(*args, **kwargs)(x1)
tf.nn.compute_accidental_hits
Compute the position ids in `sampled_candidates` matching `true_classes`.
In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic.
See our Candidate Sampling Algorithms Reference.
We presuppose that the `sampled_candidates` are unique.
We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples `(index, id, weight)`, where `index` represents the row number in `true_classes`, `id` represents the position in `sampled_candidates`, and weight is `-FLOAT_MAX`.
The result of this op should be passed through a `sparse_to_dense` operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
Args:
true_classes: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`. The sampled_candidates output of CandidateSampler.
num_true: An `int`. The number of target classes per training example.
seed: An `int`. An operation-specific seed. Default is 0.
name: A name for the operation (optional).
Returns:
indices: A `Tensor` of type `int32` and shape `[num_accidental_hits]`. Values indicate rows in `true_classes`.
ids: A `Tensor` of type `int64` and shape `[num_accidental_hits]`. Values indicate positions in `sampled_candidates`.
weights: A `Tensor` of type `float` and shape `[num_accidental_hits]`. Each value is `-FLOAT_MAX`.
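A sketch of the `sparse_to_dense` step described above, folding the accidental-hit weights into a `[batch_size, num_sampled]` grid of sampled logits. The toy tensors and the exact scatter plumbing here are assumptions for illustration, not the library's own code:
```python
import tensorflow as tf

true_classes = tf.constant([[0], [3]], dtype=tf.int64)       # [batch_size=2, num_true=1]
sampled_candidates = tf.constant([3, 7, 9], dtype=tf.int64)  # [num_sampled=3]
sampled_logits = tf.zeros([2, 3])

indices, ids, weights = tf.nn.compute_accidental_hits(
    true_classes, sampled_candidates, num_true=1)

# Build [num_hits, 2] coordinates into the sampled-logits grid.
coords = tf.transpose(tf.pack([indices, tf.cast(ids, tf.int32)]))
hit_mask = tf.sparse_to_dense(coords, tf.shape(sampled_logits), weights,
                              default_value=0.0, validate_indices=False)
sampled_logits += hit_mask  # accidental hits now contribute ~ -FLOAT_MAX logits
```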
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def compute_accidental_hits_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.compute_accidental_hits_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.compute_accidental_hits_conv2d_layer(*args, **kwargs)(x1), with the keyword argument `activation_fn` set to `tf.nn.compute_accidental_hits`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution, with input stride equal to `rate`, if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and keep a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def compute_accidental_hits_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.compute_accidental_hits_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.compute_accidental_hits_layer(*args, **kwargs)(x1), with the keyword argument `activation_fn` set to `tf.nn.compute_accidental_hits`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` has a rank greater than 2, it is flattened prior to the initial matrix multiply with `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and keep a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def concat(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.concat(*args, **kwargs)
It accepts the same arguments as `tensorflow.concat`. However, the 2nd argument is omitted; a partial that expects the 2nd argument and applies the rest is returned, such that
tensorflow.concat(x1, x2, *args, **kwargs)
is equivalent to
builder.concat(x1, *args, **kwargs)(x2)
tensorflow.concat
Concatenates tensors along one dimension.
Concatenates the list of tensors `values` along dimension `concat_dim`. If `values[i].shape = [D0, D1, ... Dconcat_dim(i), ...Dn]`, the concatenated result has shape
[D0, D1, ... Rconcat_dim, ...Dn]
where
Rconcat_dim = sum(Dconcat_dim(i))
That is, the data from the input tensors is joined along the `concat_dim` dimension.
The number of dimensions of the input tensors must match, and all dimensions except `concat_dim` must be equal.
For example:
```python
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]

# tensor t3 with shape [2, 3]
# tensor t4 with shape [2, 3]
tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
```
Note: If you are concatenating along a new axis consider using pack. E.g.
```python
tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors])
```
can be rewritten as
```python
tf.pack(tensors, axis=axis)
```
Args:
concat_dim: 0-D `int32` `Tensor`. Dimension along which to concatenate.
values: A list of `Tensor` objects or a single `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` resulting from concatenation of the input tensors.
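Since `concat` omits the 2nd argument (`values`), the piped value is the tensor list while `concat_dim` is fixed up front. A minimal sketch; the `T` builder instance is an assumed import:
```python
import tensorflow as tf
from tensorbuilder import T  # assumed: the package's TensorBuilder instance

t1 = tf.constant([[1, 2, 3], [4, 5, 6]])
t2 = tf.constant([[7, 8, 9], [10, 11, 12]])

a = tf.concat(0, [t1, t2])  # direct call (TF 0.x argument order)
b = T.concat(0)([t1, t2])   # equivalent: the partial receives `values`
```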
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then2(fn, *args, **kwargs)
def cond(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cond(*args, **kwargs)
It accepts the same arguments as `tensorflow.cond`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.cond(x1, *args, **kwargs)
is equivalent to
builder.cond(*args, **kwargs)(x1)
tensorflow.cond
Return either fn1() or fn2() based on the boolean predicate `pred`.
`fn1` and `fn2` both return lists of output tensors. `fn1` and `fn2` must have the same non-zero number and type of outputs.
Note that the conditional execution applies only to the operations defined in fn1 and fn2. Consider the following simple program:
```python
z = tf.mul(a, b)
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
```
If x < y, the tf.add operation will be executed and the tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.mul operation is always executed, unconditionally. Although this behavior is consistent with the dataflow model of TensorFlow, it has occasionally surprised some users who expected lazier semantics.
Args:
pred: A scalar determining whether to return the result of `fn1` or `fn2`.
fn1: The callable to be performed if `pred` is true.
fn2: The callable to be performed if `pred` is false.
name: Optional name prefix for the returned tensors.
Returns:
Tensors returned by the call to either `fn1` or `fn2`. If the callables return a singleton list, the element is extracted from the list.
Raises:
TypeError: if `fn1` or `fn2` is not callable.
ValueError: if `fn1` and `fn2` do not return the same number of tensors, or return tensors of different types.
Example:
```python
x = tf.constant(2)
y = tf.constant(5)
def f1(): return tf.mul(x, 17)
def f2(): return tf.add(y, 23)
r = cond(tf.less(x, y), f1, f2)
# r is set to f1().
# Operations in f2 (e.g., tf.add) are not executed.
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def conj(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conj(*args, **kwargs)
It accepts the same arguments as `tensorflow.conj`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.conj(x1, *args, **kwargs)
is equivalent to
builder.conj(*args, **kwargs)(x1)
tensorflow.conj
Returns the complex conjugate of a complex number.
Given a tensor `input` of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in `input`. The complex numbers in `input` must be of the form \(a + bj\), where a is the real part and b is the imaginary part.
The complex conjugate returned by this operation is of the form \(a - bj\).
For example:
```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
```
If `x` is real, it is returned unchanged.
Args:
x: `Tensor` to conjugate. Must have numeric type.
name: A name for the operation (optional).
Returns:
A `Tensor` that is the conjugate of `x` (with the same type).
Raises:
TypeError: If `x` is not a numeric tensor.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def constant(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.constant(*args, **kwargs)
It accepts the same arguments as `tensorflow.constant`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.constant(x1, *args, **kwargs)
is equivalent to
builder.constant(*args, **kwargs)(x1)
tensorflow.constant
Creates a constant tensor.
The resulting tensor is populated with values of type `dtype`, as specified by arguments `value` and (optionally) `shape` (see examples below).
The argument `value` can be a constant value, or a list of values of type `dtype`. If `value` is a list, then the length of the list must be less than or equal to the number of elements implied by the `shape` argument (if specified). In the case where the list length is less than the number of elements specified by `shape`, the last element in the list will be used to fill the remaining entries.
The argument `shape` is optional. If present, it specifies the dimensions of the resulting tensor. If not present, the shape of `value` is used.
If the argument `dtype` is not specified, then the type is inferred from the type of `value`.
For example:
```python
# Constant 1-D Tensor populated with value list.
tensor = tf.constant([1, 2, 3, 4, 5, 6, 7])  # => [1 2 3 4 5 6 7]

# Constant 2-D tensor populated with scalar value -1.
tensor = tf.constant(-1.0, shape=[2, 3])  # => [[-1. -1. -1.] [-1. -1. -1.]]
```
Args:
value: A constant value (or list) of output type `dtype`.
dtype: The type of the elements of the resulting tensor.
shape: Optional dimensions of resulting tensor.
name: Optional name for the tensor.
Returns: A Constant Tensor.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def constant_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.constant_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.constant_initializer`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.constant_initializer(x1, *args, **kwargs)
is equivalent to
builder.constant_initializer(*args, **kwargs)(x1)
tensorflow.constant_initializer
Returns an initializer that generates tensors with constant values.
The resulting tensor is populated with values of type `dtype`, as specified by arguments `value` following the desired `shape` of the new tensor (see examples below).
The argument `value` can be a constant value, or a list of values of type `dtype`. If `value` is a list, then the length of the list must be less than or equal to the number of elements implied by the desired shape of the tensor. In the case where the total number of elements in `value` is less than the number of elements required by the tensor shape, the last element in `value` will be used to fill the remaining entries. If the total number of elements in `value` is greater than the number of elements required by the tensor shape, the initializer will raise a `ValueError`.
Args:
value: A Python scalar, list of values, or an N-dimensional numpy array. All elements of the initialized variable will be set to the corresponding value in the `value` argument.
dtype: The data type.
Returns: An initializer that generates tensors with constant values.
Examples:
The following example can be rewritten using a numpy.ndarray instead of the `value` list, even reshaped, as shown in the two commented lines below the `value` list initialization.
```python
import numpy as np
import tensorflow as tf

value = [0, 1, 2, 3, 4, 5, 6, 7]
# value = np.array(value)
# value = value.reshape([2, 4])
init = tf.constant_initializer(value)

print('fitting shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[2, 4], initializer=init)
    x.initializer.run()
    print(x.eval())
# fitting shape: [[ 0.  1.  2.  3.] [ 4.  5.  6.  7.]]

print('larger shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[3, 4], initializer=init)
    x.initializer.run()
    print(x.eval())
# larger shape: [[ 0.  1.  2.  3.] [ 4.  5.  6.  7.] [ 7.  7.  7.  7.]]

print('smaller shape:')
tf.reset_default_graph()
with tf.Session():
    x = tf.get_variable('x', shape=[2, 3], initializer=init)
# ValueError: Too many elements provided. Needed at most 6, but received 8
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def container(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.container(*args, **kwargs)
It accepts the same arguments as `tensorflow.container`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.container(x1, *args, **kwargs)
is equivalent to
builder.container(*args, **kwargs)(x1)
tensorflow.container
Wrapper for `Graph.container()` using the default graph.
Args: container_name: The container string to use in the context.
Returns: A context manager that specifies the default container to use for newly created stateful ops.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def control_dependencies(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.control_dependencies(*args, **kwargs)
It accepts the same arguments as `tensorflow.control_dependencies`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tensorflow.control_dependencies(x1, *args, **kwargs)
is equivalent to
builder.control_dependencies(*args, **kwargs)(x1)
tensorflow.control_dependencies
Wrapper for `Graph.control_dependencies()` using the default graph.
See `Graph.control_dependencies()` for more details.
Args:
control_inputs: A list of `Operation` or `Tensor` objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.
Returns: A context manager that specifies control dependencies for all operations constructed within the context.
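A minimal sketch of the context-manager usage; the variable and ops are illustrative assumptions:
```python
import tensorflow as tf

counter = tf.Variable(0, name="counter")
increment = tf.assign_add(counter, 1)
x = tf.placeholder(tf.float32, [None])
with tf.control_dependencies([increment]):
    y = tf.identity(x)  # computing y now always bumps counter first
```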
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def conv1d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv1d(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv1d`. However, the 1st argument is omitted; a partial that expects the 1st argument and applies the rest is returned, such that
tf.nn.conv1d(x1, *args, **kwargs)
is equivalent to
builder.conv1d(*args, **kwargs)(x1)
tf.nn.conv1d
Computes a 1-D convolution given 3-D input and filter tensors.
Given an input tensor of shape [batch, in_width, in_channels] and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation.
Internally, this op reshapes the input tensors and invokes `tf.nn.conv2d`. A tensor of shape `[batch, in_width, in_channels]` is reshaped to `[batch, 1, in_width, in_channels]`, and the filter is reshaped to `[1, filter_width, in_channels, out_channels]`. The result is then reshaped back to `[batch, out_width, out_channels]` (where `out_width` is a function of the stride and padding as in `conv2d`) and returned to the caller.
Args:
value: A 3-D `Tensor`. Must be of type `float32` or `float64`.
filters: A 3-D `Tensor`. Must have the same type as `input`.
stride: An `integer`. The number of entries by which the filter is moved right at each step.
padding: 'SAME' or 'VALID'.
use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
data_format: An optional `string` from `"NHWC", "NCHW"`. Defaults to `"NHWC"`, the data is stored in the order of `[batch, in_width, in_channels]`. The `"NCHW"` format stores data as `[batch, in_channels, in_width]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as input.
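A minimal usage sketch of the wrapped function (shapes and names are illustrative only):
```python
import tensorflow as tf

# Minimal sketch: 1-D convolution over a batch of 8-channel signals.
value = tf.placeholder(tf.float32, [None, 100, 8])       # [batch, in_width, in_channels]
filters = tf.Variable(
    tf.truncated_normal([5, 8, 16], stddev=0.1))         # [filter_width, in, out]

out = tf.nn.conv1d(value, filters, stride=2, padding='SAME')  # [batch, 50, 16]
```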
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv1d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv1d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv1d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv1d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
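A minimal sketch of calling the wrapped contrib layer directly (shapes illustrative; `activation_fn` defaults to ReLU):
```python
import tensorflow as tf

# Minimal sketch: a 3x3 convolution with 64 output filters.
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.contrib.layers.convolution2d(
    x, num_outputs=64, kernel_size=[3, 3], stride=1, padding='SAME')
```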
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv1d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv1d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv1d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv1d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
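A minimal sketch of calling the wrapped contrib layer directly (shapes illustrative; `activation_fn` defaults to ReLU, so it is disabled explicitly for the logits):
```python
import tensorflow as tf

# Minimal sketch: a 784 -> 256 -> 10 stack of fully connected layers.
x = tf.placeholder(tf.float32, [None, 784])
h = tf.contrib.layers.fully_connected(x, num_outputs=256)
logits = tf.contrib.layers.fully_connected(h, 10, activation_fn=None)
```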
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv2d(x1, *args, **kwargs)
is equivalent to
builder.conv2d(*args, **kwargs)(x1)
tf.nn.conv2d
Computes a 2-D convolution given 4-D `input` and `filter` tensors.
Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:
- Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
- Extracts image patches from the input tensor to form a virtual tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
- For each patch, right-multiplies the filter matrix and the image patch vector.
In detail, with the default NHWC format,
output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]
Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Args:
input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
filter: A `Tensor`. Must have the same type as `input`.
strides: A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`. Must be in the same order as the dimension specified with format.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
data_format: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: `[batch, in_height, in_width, in_channels]`. Alternatively, the format could be "NCHW", the data storage order of: `[batch, in_channels, in_height, in_width]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
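To make the partial-application pattern concrete, a hedged sketch (names illustrative; `builder` stands for any `TensorBuilder` instance):
```python
import tensorflow as tf

# Minimal sketch of the documented equivalence.
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))

direct = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

# The wrapper defers the first argument, so the following line would
# build the same op (assuming `builder` is a TensorBuilder instance):
#   builder.conv2d(w, strides=[1, 1, 1, 1], padding='SAME')(x)
```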
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_backprop_filter(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_backprop_filter(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv2d_backprop_filter`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv2d_backprop_filter(x1, *args, **kwargs)
is equivalent to
builder.conv2d_backprop_filter(*args, **kwargs)(x1)
tf.nn.conv2d_backprop_filter
Computes the gradients of convolution with respect to the filter.
Args:
input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
filter_sizes: A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.
out_backprop: A `Tensor`. Must have the same type as `input`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
strides: A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
data_format: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: `[batch, in_height, in_width, in_channels]`. Alternatively, the format could be "NCHW", the data storage order of: `[batch, in_channels, in_height, in_width]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t. the `filter` input of the convolution.
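A minimal sketch of calling the raw gradient op directly; the argument names follow the Args list above, and the shapes are illustrative:
```python
import tensorflow as tf

# Minimal sketch: gradient of a SAME-padded, stride-1 conv2d w.r.t. its
# 5x5x1x32 filter, given the input and the gradient of the output.
x = tf.placeholder(tf.float32, [8, 28, 28, 1])
dy = tf.placeholder(tf.float32, [8, 28, 28, 32])  # grad w.r.t. conv output

dfilter = tf.nn.conv2d_backprop_filter(
    x, filter_sizes=[5, 5, 1, 32], out_backprop=dy,
    strides=[1, 1, 1, 1], padding='SAME')
```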
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_backprop_filter_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_backprop_filter_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv2d_backprop_filter_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d_backprop_filter`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_backprop_filter_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_backprop_filter_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv2d_backprop_filter_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d_backprop_filter`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_backprop_input(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_backprop_input(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv2d_backprop_input`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv2d_backprop_input(x1, *args, **kwargs)
is equivalent to
builder.conv2d_backprop_input(*args, **kwargs)(x1)
tf.nn.conv2d_backprop_input
Computes the gradients of convolution with respect to the input.
Args:
input_sizes: A `Tensor` of type `int32`. An integer vector representing the shape of `input`, where `input` is a 4-D `[batch, height, width, channels]` tensor.
filter: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
out_backprop: A `Tensor`. Must have the same type as `filter`. 4-D with shape `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
strides: A list of `ints`. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
data_format: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: `[batch, in_height, in_width, in_channels]`. Alternatively, the format could be "NCHW", the data storage order of: `[batch, in_channels, in_height, in_width]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `filter`. 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient w.r.t. the input of the convolution.
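The mirror image of the previous sketch, again with illustrative shapes:
```python
import tensorflow as tf

# Minimal sketch: gradient of a SAME-padded, stride-1 conv2d w.r.t. its
# input, given the filter and the gradient of the output.
w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
dy = tf.placeholder(tf.float32, [8, 28, 28, 32])

dx = tf.nn.conv2d_backprop_input(
    input_sizes=[8, 28, 28, 1], filter=w, out_backprop=dy,
    strides=[1, 1, 1, 1], padding='SAME')
```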
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_backprop_input_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_backprop_input_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv2d_backprop_input_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d_backprop_input`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_backprop_input_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_backprop_input_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv2d_backprop_input_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d_backprop_input`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv2d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_transpose(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_transpose(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv2d_transpose`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv2d_transpose(x1, *args, **kwargs)
is equivalent to
builder.conv2d_transpose(*args, **kwargs)(x1)
tf.nn.conv2d_transpose
The transpose of `conv2d`.
This operation is sometimes called "deconvolution" after Deconvolutional Networks, but is actually the transpose (gradient) of `conv2d` rather than an actual deconvolution.
Args:
value: A 4-D `Tensor` of type `float` and shape `[batch, height, width, in_channels]`.
filter: A 4-D `Tensor` with the same type as `value` and shape `[height, width, output_channels, in_channels]`. `filter`'s `in_channels` dimension must match that of `value`.
output_shape: A 1-D `Tensor` representing the output shape of the deconvolution op.
strides: A list of ints. The stride of the sliding window for each dimension of the input tensor.
padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
name: Optional name for the returned tensor.
Returns:
A `Tensor` with the same type as `value`.
Raises:
ValueError: If input/output depth does not match `filter`'s shape, or if padding is other than `'VALID'` or `'SAME'`.
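A minimal usage sketch (shapes illustrative); note the filter layout `[height, width, output_channels, in_channels]`:
```python
import tensorflow as tf

# Minimal sketch: upsample a 14x14x32 feature map to 28x28x16.
value = tf.placeholder(tf.float32, [8, 14, 14, 32])
kernel = tf.Variable(tf.truncated_normal([5, 5, 16, 32], stddev=0.1))

up = tf.nn.conv2d_transpose(value, kernel,
                            output_shape=[8, 28, 28, 16],
                            strides=[1, 2, 2, 1], padding='SAME')
```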
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_transpose_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_transpose_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv2d_transpose_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d_transpose`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv2d_transpose_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv2d_transpose_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv2d_transpose_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv2d_transpose`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv3d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv3d(x1, *args, **kwargs)
is equivalent to
builder.conv3d(*args, **kwargs)(x1)
tf.nn.conv3d
Computes a 3-D convolution given 5-D `input` and `filter` tensors.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.
Our Conv3D implements a form of cross-correlation.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Shape `[batch, in_depth, in_height, in_width, in_channels]`.
filter: A `Tensor`. Must have the same type as `input`. Shape `[filter_depth, filter_height, filter_width, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`.
strides: A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
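A minimal usage sketch over a video-like 5-D input (shapes illustrative):
```python
import tensorflow as tf

# Minimal sketch: 3-D convolution over [batch, depth, height, width, channels].
video = tf.placeholder(tf.float32, [None, 16, 64, 64, 3])
kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 3, 8], stddev=0.1))

features = tf.nn.conv3d(video, kernel,
                        strides=[1, 1, 1, 1, 1], padding='SAME')
```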
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_filter(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_filter(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv3d_backprop_filter`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv3d_backprop_filter(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_filter(*args, **kwargs)(x1)
tf.nn.conv3d_backprop_filter
Computes the gradients of 3-D convolution with respect to the filter.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Shape `[batch, depth, rows, cols, in_channels]`.
filter: A `Tensor`. Must have the same type as `input`. Shape `[depth, rows, cols, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`.
out_backprop: A `Tensor`. Must have the same type as `input`. Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`.
strides: A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_filter_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_filter_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_filter_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_filter`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_filter_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_filter_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_filter_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_filter`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_filter_v2(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_filter_v2(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv3d_backprop_filter_v2`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv3d_backprop_filter_v2(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_filter_v2(*args, **kwargs)(x1)
tf.nn.conv3d_backprop_filter_v2
Computes the gradients of 3-D convolution with respect to the filter.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Shape `[batch, depth, rows, cols, in_channels]`.
filter_sizes: A `Tensor` of type `int32`. An integer vector representing the tensor shape of `filter`, where `filter` is a 5-D `[filter_depth, filter_height, filter_width, in_channels, out_channels]` tensor.
out_backprop: A `Tensor`. Must have the same type as `input`. Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`.
strides: A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_filter_v2_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_filter_v2_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_filter_v2_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_filter_v2`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: A 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: Integer, the number of output filters.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: One of `VALID` or `SAME`.
rate: Integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: A tensor representing the output of the operation.
Raises:
ValueError: If both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_filter_v2_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_filter_v2_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_filter_v2_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_filter_v2`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: Activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: Collection to add the outputs to.
trainable: If True, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: The tensor variable representing the result of the series of operations.
Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_input(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_input(*args, **kwargs)
It accepts the same arguments as `tf.nn.conv3d_backprop_input`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv3d_backprop_input(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_input(*args, **kwargs)(x1)
tf.nn.conv3d_backprop_input
Computes the gradients of 3-D convolution with respect to the input.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Shape `[batch, depth, rows, cols, in_channels]`.
filter: A `Tensor`. Must have the same type as `input`. Shape `[depth, rows, cols, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`.
out_backprop: A `Tensor`. Must have the same type as `input`. Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`.
strides: A list of `ints` that has length `>= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
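A minimal sketch of the wrapped op itself, against the TF 1.x-era signature documented above (shapes are chosen to be self-consistent and are purely illustrative):

```python
import tensorflow as tf  # assumes the TF 1.x-era API this reference documents

x  = tf.ones([1, 4, 4, 4, 1])   # input:  [batch, depth, rows, cols, in_channels]
w  = tf.ones([2, 2, 2, 1, 1])   # filter: [depth, rows, cols, in_channels, out_channels]
dy = tf.ones([1, 4, 4, 4, 1])   # out_backprop for 'SAME' padding, unit strides

# Gradient of conv3d(x, w) with respect to x; has the same shape as x.
dx = tf.nn.conv3d_backprop_input(x, w, dy,
                                 strides=[1, 1, 1, 1, 1], padding="SAME")
```

Per the equivalence above, the builder would produce the same node via builder.conv3d_backprop_input(w, dy, strides=[1, 1, 1, 1, 1], padding="SAME")(x).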
def conv3d_backprop_input_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_input_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_input_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_input`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_input_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_input_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_input_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_input`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the hidden units. Finally, if activation_fn is not
None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened
prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last
dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_input_v2(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_input_v2(*args, **kwargs)
It accepts the same arguments as tf.nn.conv3d_backprop_input_v2.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv3d_backprop_input_v2(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_input_v2(*args, **kwargs)(x1)
tf.nn.conv3d_backprop_input_v2
Computes the gradients of 3-D convolution with respect to the input.
Args:
input_sizes: A Tensor of type int32.
An integer vector representing the tensor shape of input, where input
is a 5-D [batch, depth, rows, cols, in_channels] tensor.
filter: A Tensor. Must be one of the following types: float32, float64,
int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8,
quint8, qint32, half.
Shape [depth, rows, cols, in_channels, out_channels].
in_channels must match between input and filter.
out_backprop: A Tensor. Must have the same type as filter.
Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels].
strides: A list of ints that has length >= 5.
1-D tensor of length 5. The stride of the sliding window for each
dimension of input. Must have strides[0] = strides[4] = 1.
padding: A string from: "SAME", "VALID".
The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as filter.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_input_v2_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_input_v2_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_input_v2_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_input_v2`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_backprop_input_v2_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_backprop_input_v2_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv3d_backprop_input_v2_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_backprop_input_v2`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the hidden units. Finally, if activation_fn is not
None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened
prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last
dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv3d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv3d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the hidden units. Finally, if activation_fn is not
None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened
prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last
dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_transpose(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_transpose(*args, **kwargs)
It accepts the same arguments as tf.nn.conv3d_transpose.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.conv3d_transpose(x1, *args, **kwargs)
is equivalent to
builder.conv3d_transpose(*args, **kwargs)(x1)
tf.nn.conv3d_transpose
The transpose of `conv3d`.
This operation is sometimes called "deconvolution" after Deconvolutional
Networks, but is actually the transpose (gradient) of conv3d rather than an
actual deconvolution.
Args:
value: A 5-D Tensor of type float and shape
[batch, depth, height, width, in_channels].
filter: A 5-D Tensor with the same type as value and shape
[depth, height, width, output_channels, in_channels]. filter's
in_channels dimension must match that of value.
output_shape: A 1-D Tensor representing the output shape of the
deconvolution op.
strides: A list of ints. The stride of the sliding window for each
dimension of the input tensor.
padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
See the comment here.
name: Optional name for the returned tensor.
Returns:
A Tensor with the same type as value.
Raises:
ValueError: If input/output depth does not match filter's shape, or if
padding is other than 'VALID' or 'SAME'.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
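For instance, upsampling a 4x4x4 volume by a factor of two with the argument layout documented above (a sketch against the TF 1.x-era API; shapes are illustrative):

```python
import tensorflow as tf  # TF 1.x-era API assumed

value = tf.ones([1, 4, 4, 4, 8])    # [batch, depth, height, width, in_channels]
filt  = tf.ones([2, 2, 2, 16, 8])   # [depth, height, width, output_channels, in_channels]

# With 'SAME' padding and stride 2, spatial dims double: 4 -> 8.
y = tf.nn.conv3d_transpose(value, filt,
                           output_shape=[1, 8, 8, 8, 16],
                           strides=[1, 2, 2, 2, 1],
                           padding="SAME")  # y: [1, 8, 8, 8, 16]
```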
def conv3d_transpose_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_transpose_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.conv3d_transpose_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_transpose`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def conv3d_transpose_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.conv3d_transpose_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.conv3d_transpose_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.conv3d_transpose`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the hidden units. Finally, if activation_fn is not
None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened
prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last
dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def convert_to_tensor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.convert_to_tensor(*args, **kwargs)
It accepts the same arguments as tensorflow.convert_to_tensor.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.convert_to_tensor(x1, *args, **kwargs)
is equivalent to
builder.convert_to_tensor(*args, **kwargs)(x1)
tensorflow.convert_to_tensor
Converts the given `value` to a `Tensor`.
This function converts Python objects of various types to Tensor objects.
It accepts Tensor objects, numpy arrays, Python lists, and Python scalars.
For example:

```python
import numpy as np

def my_func(arg):
  arg = tf.convert_to_tensor(arg, dtype=tf.float32)
  return tf.matmul(arg, arg) + arg

# The following calls are equivalent.
value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
```

This function can be useful when composing a new operation in Python
(such as my_func in the example above). All standard Python op
constructors apply this function to each of their Tensor-valued
inputs, which allows those ops to accept numpy arrays, Python lists,
and scalars in addition to Tensor objects.
Args:
value: An object whose type has a registered Tensor conversion function.
dtype: Optional element type for the returned tensor. If missing, the
type is inferred from the type of value.
name: Optional name to use if a new Tensor is created.
as_ref: True if we want the result as a ref tensor. Only used if a new
Tensor is created.
preferred_dtype: Optional element type for the returned tensor,
used when dtype is None. In some cases, a caller may not have a
dtype in mind when converting to a tensor, so preferred_dtype
can be used as a soft preference. If the conversion to
preferred_dtype is not possible, this argument has no effect.
Returns:
A Tensor based on value.
Raises:
TypeError: If no conversion function is registered for value.
RuntimeError: If a registered conversion function returns an invalid value.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def convert_to_tensor_or_indexed_slices(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.convert_to_tensor_or_indexed_slices(*args, **kwargs)
It accepts the same arguments as tensorflow.convert_to_tensor_or_indexed_slices.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.convert_to_tensor_or_indexed_slices(x1, *args, **kwargs)
is equivalent to
builder.convert_to_tensor_or_indexed_slices(*args, **kwargs)(x1)
tensorflow.convert_to_tensor_or_indexed_slices
Converts the given object to a `Tensor` or an `IndexedSlices`.
If value is an IndexedSlices or SparseTensor it is returned
unmodified. Otherwise, it is converted to a Tensor using
convert_to_tensor().
Args:
value: An IndexedSlices, SparseTensor, or an object that can be consumed
by convert_to_tensor().
dtype: (Optional.) The required DType of the returned Tensor or
IndexedSlices.
name: (Optional.) A name to use if a new Tensor is created.
as_ref: True if the caller wants the results as ref tensors.
Returns:
A Tensor, IndexedSlices, or SparseTensor based on value.
Raises:
ValueError: If dtype does not match the element type of value.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def convolution2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.convolution2d(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.convolution2d(*args, **kwargs)(x1)
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
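A short sketch of a typical call (assumes TF 1.x, where tf.contrib.layers exists; the placeholder shape is illustrative):

```python
import tensorflow as tf  # TF 1.x-era API assumed

images = tf.placeholder(tf.float32, [None, 28, 28, 3])
net = tf.contrib.layers.convolution2d(images,
                                      num_outputs=32,   # 32 output filters
                                      kernel_size=3,    # int form: 3x3 kernel
                                      stride=1,
                                      padding='SAME',
                                      activation_fn=tf.nn.relu)
```

Via the builder, builder.convolution2d(num_outputs=32, kernel_size=3)(images) builds the same layer, per the equivalence above.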
def cos(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cos(*args, **kwargs)
It accepts the same arguments as tensorflow.cos.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.cos(x1, *args, **kwargs)
is equivalent to
builder.cos(*args, **kwargs)(x1)
tensorflow.cos
Computes cos of x element-wise.
Args:
x: A Tensor. Must be one of the following types: half, float32, float64,
complex64, complex128.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as x.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
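A minimal end-to-end sketch (TF 1.x graph-and-session style assumed):

```python
import math
import tensorflow as tf  # TF 1.x-era API assumed

x = tf.constant([0.0, math.pi])
y = tf.cos(x)                   # what builder.cos()(x) resolves to

with tf.Session() as sess:
    print(sess.run(y))          # approx. [ 1. -1.]
```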
def count_up_to(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.count_up_to(*args, **kwargs)
It accepts the same arguments as tensorflow.count_up_to.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.count_up_to(x1, *args, **kwargs)
is equivalent to
builder.count_up_to(*args, **kwargs)(x1)
tensorflow.count_up_to
Increments 'ref' until it reaches 'limit'.
This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the updated value.
Args:
ref: A mutable Tensor. Must be one of the following types: int32, int64.
Should be from a scalar Variable node.
limit: An int.
If incrementing ref would bring it above limit, instead generates an
'OutOfRange' error.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as ref.
A copy of the input before increment. If nothing else modifies the
input, the values produced will all be distinct.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
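A sketch of the increment-with-limit behavior (TF 1.x-era API assumed):

```python
import tensorflow as tf  # TF 1.x-era API assumed

counter = tf.Variable(0, dtype=tf.int32)
bumped = tf.count_up_to(counter, limit=3)   # builder.count_up_to(limit=3)(counter)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(bumped))   # 0 -- the pre-increment value
    print(sess.run(bumped))   # 1; once the counter reaches 3, a run raises OutOfRange
```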
def create_partitioned_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.create_partitioned_variables(*args, **kwargs)
It accepts the same arguments as tensorflow.create_partitioned_variables.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.create_partitioned_variables(x1, *args, **kwargs)
is equivalent to
builder.create_partitioned_variables(*args, **kwargs)(x1)
tensorflow.create_partitioned_variables
Create a list of partitioned variables according to the given `slicing`.
Currently only one dimension of the full variable can be sliced, and the full variable can be reconstructed by the concatenation of the returned list along that dimension.
Args:
shape: List of integers. The shape of the full variable.
slicing: List of integers. How to partition the variable.
Must be of the same length as shape. Each value
indicates how many slices to create in the corresponding
dimension. Presently only one of the values can be more than 1;
that is, the variable can only be sliced along one dimension.
For convenience, the requested number of partitions does not have to divide
the corresponding dimension evenly. If it does not, the shapes of the
partitions are incremented by 1 starting from partition 0 until all slack is
absorbed. The adjustment rules may change in the future, but as you can
save/restore these variables with different slicing specifications this
should not be a problem.
initializer: A Tensor of shape shape or a variable initializer
function. If a function, it will be called once for each slice,
passing the shape and data type of the slice as parameters. The
function must return a tensor with the same shape as the slice.
dtype: Type of the variables. Ignored if initializer is a Tensor.
trainable: If True, also add all the variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES.
collections: List of graph collections keys to add the variables to.
Defaults to [GraphKeys.VARIABLES].
name: Optional name for the full variable. Defaults to
"PartitionedVariable" and gets uniquified automatically.
reuse: Boolean or None; if True and name is set, it would reuse
previously created variables. If False it will create new variables.
If None, it would inherit the parent scope reuse.
Returns: A list of Variables corresponding to the slicing.
Raises: ValueError: If any of the arguments is malformed.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
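For example, slicing a [10, 4] variable into five pieces along its first dimension, using the Tensor form of initializer documented above (a sketch against the TF 1.x-era API):

```python
import tensorflow as tf  # TF 1.x-era API assumed

# Five slices along dimension 0, one slice along dimension 1.
parts = tf.create_partitioned_variables(shape=[10, 4],
                                        slicing=[5, 1],
                                        initializer=tf.zeros([10, 4]))
# len(parts) == 5; tf.concat(parts, 0) reconstructs the full [10, 4] value.
```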
def crelu(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.crelu(*args, **kwargs)
It accepts the same arguments as tf.nn.crelu.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.crelu(x1, *args, **kwargs)
is equivalent to
builder.crelu(*args, **kwargs)(x1)
tf.nn.crelu
Computes Concatenated ReLU.
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: https://arxiv.org/abs/1603.05201
Args:
features: A Tensor with type float, double, int32, int64, uint8,
int16, or int8.
name: A name for the operation (optional).
Returns:
A Tensor with the same type as features.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
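A tiny sketch showing the depth doubling (TF 1.x-era API assumed):

```python
import tensorflow as tf  # TF 1.x-era API assumed

x = tf.constant([[-1.0, 2.0]])   # shape [1, 2]
y = tf.nn.crelu(x)               # concatenates relu(x) with relu(-x): shape [1, 4]
```

Because the output depth doubles, wiring crelu in as an activation_fn (as the generated crelu_layer below does) changes the shape contract of the layer accordingly.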
def crelu_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.crelu_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.crelu_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.crelu`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def crelu_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.crelu_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.crelu_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.crelu`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the hidden units. Finally, if activation_fn is not
None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened
prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last
dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def cross(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cross(*args, **kwargs)
It accepts the same arguments as tensorflow.cross.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.cross(x1, *args, **kwargs)
is equivalent to
builder.cross(*args, **kwargs)(x1)
tensorflow.cross
Compute the pairwise cross product.
a and b must be the same shape; they can either be simple 3-element vectors,
or any shape where the innermost dimension is 3. In the latter case, each pair
of corresponding 3-element vectors is cross-multiplied independently.
Args:
a: A Tensor. Must be one of the following types: float32, float64, int32,
int64, uint8, int16, int8, uint16, half.
A tensor containing 3-element vectors.
b: A Tensor. Must have the same type as a.
Another tensor, of same type and shape as a.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as a.
Pairwise cross product of the vectors in a and b.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
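For example (TF 1.x-era API assumed; note the argument order under the builder convention):

```python
import tensorflow as tf  # TF 1.x-era API assumed

a = tf.constant([[1.0, 0.0, 0.0]])
b = tf.constant([[0.0, 1.0, 0.0]])

c = tf.cross(a, b)   # [[0., 0., 1.]]; builder.cross(b)(a) builds the same node
```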
def ctc_beam_search_decoder(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_beam_search_decoder(*args, **kwargs)
It accepts the same arguments as tf.nn.ctc_beam_search_decoder.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.ctc_beam_search_decoder(x1, *args, **kwargs)
is equivalent to
builder.ctc_beam_search_decoder(*args, **kwargs)(x1)
tf.nn.ctc_beam_search_decoder
Performs beam search decoding on the logits given in input.
Note: The ctc_greedy_decoder is a special case of the
ctc_beam_search_decoder with top_paths=1 (but that decoder is faster
for this special case).
If merge_repeated is True, merge repeated classes in the output beams.
This means that if consecutive entries in a beam are the same,
only the first of these is emitted. That is, when the top path
is A B B B B, the return value is:
A B if merge_repeated = True.
A B B B B if merge_repeated = False.
Args:
inputs: 3-D float Tensor, size [max_time x batch_size x num_classes].
The logits.
sequence_length: 1-D int32 vector containing sequence lengths,
having size [batch_size].
beam_width: An int scalar >= 0 (beam search beam width).
top_paths: An int scalar >= 0, <= beam_width (controls output size).
merge_repeated: Boolean. Default: True.
Returns:
A tuple (decoded, log_probabilities) where
decoded: A list of length top_paths, where decoded[j] is a SparseTensor
containing the decoded outputs:
decoded[j].indices: Indices matrix (total_decoded_outputs[j] x 2).
The rows store: [batch, time].
decoded[j].values: Values vector, size (total_decoded_outputs[j]).
The vector stores the decoded classes for beam j.
decoded[j].shape: Shape vector, size (2).
The shape values are: [batch_size, max_decoded_length[j]].
log_probability: A float matrix (batch_size x top_paths) containing
sequence log-probabilities.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
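A shape-only sketch of a call (TF 1.x-era API assumed; sizes are illustrative):

```python
import tensorflow as tf  # TF 1.x-era API assumed

logits = tf.random_normal([50, 2, 10])            # [max_time, batch_size, num_classes]
seq_len = tf.constant([50, 50], dtype=tf.int32)   # [batch_size]

decoded, log_prob = tf.nn.ctc_beam_search_decoder(
    logits, seq_len, beam_width=100, top_paths=1)
# decoded[0] is a SparseTensor; log_prob has shape [2, 1].
```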
def ctc_beam_search_decoder_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_beam_search_decoder_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.ctc_beam_search_decoder_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.ctc_beam_search_decoder`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the
convolutional kernel, that is convolved with the inputs to produce a
Tensor of activations. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the activations. Finally, if activation_fn is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the
filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width].
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, an a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ctc_beam_search_decoder_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_beam_search_decoder_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.ctc_beam_search_decoder_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.ctc_beam_search_decoder`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided then a biases variable would be
created and added to the hidden units. Finally, if activation_fn is not
None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened
prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last
dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases. If
normalizer_fn is provided then biases_initializer and
biases_regularizer are ignored and biases are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ctc_greedy_decoder(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_greedy_decoder(*args, **kwargs)
It accepts the same arguments as tf.nn.ctc_greedy_decoder.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.ctc_greedy_decoder(x1, *args, **kwargs)
is equivalent to
builder.ctc_greedy_decoder(*args, **kwargs)(x1)
tf.nn.ctc_greedy_decoder
Performs greedy decoding on the logits given in input (best path).
Note: Regardless of the value of merge_repeated, if the maximum index of a
given time and batch corresponds to the blank index (num_classes - 1), no
new element is emitted.
If merge_repeated is True, merge repeated classes in output.
This means that if consecutive logits' maximum indices are the same,
only the first of these is emitted. The sequence A B B * B * B
(where '*' is the blank label) becomes
A B if merge_repeated=True.
A B B B B B if merge_repeated=False.
Args:
inputs: 3-D float Tensor sized [max_time x batch_size x num_classes].
The logits.
sequence_length: 1-D int32 vector containing sequence lengths,
having size [batch_size].
merge_repeated: Boolean. Default: True.
Returns:
A tuple (decoded, log_probabilities) where
decoded: A single-element list. decoded[0] is a SparseTensor
containing the decoded outputs s.t.:
decoded.indices: Indices matrix (total_decoded_outputs x 2).
The rows store: [batch, time].
decoded.values: Values vector, size (total_decoded_outputs).
The vector stores the decoded classes.
decoded.shape: Shape vector, size (2).
The shape values are: [batch_size, max_decoded_length].
log_probability: A float matrix (batch_size x 1) containing sequence
log-probabilities.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
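And the greedy counterpart, with the same input layout (TF 1.x-era API assumed; sizes are illustrative):

```python
import tensorflow as tf  # TF 1.x-era API assumed

logits = tf.random_normal([50, 2, 10])            # [max_time, batch_size, num_classes]
seq_len = tf.constant([50, 50], dtype=tf.int32)   # [batch_size]

decoded, log_prob = tf.nn.ctc_greedy_decoder(logits, seq_len,
                                             merge_repeated=True)
# decoded is a single-element list of SparseTensors; log_prob has shape [2, 1].
```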
def ctc_greedy_decoder_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_greedy_decoder_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.ctc_greedy_decoder_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.ctc_greedy_decoder`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d
creates a variable called weights
, representing the
convolutional kernel, that is convolved with the inputs
to produce a
Tensor
of activations. If a normalizer_fn
is provided (such as
batch_norm
), it is then applied. Otherwise, if normalizer_fn
is
None and a biases_initializer
is provided then a biases
variable would be
created and added the activations. Finally, if activation_fn
is not None
,
it is applied to the activations as well.
Performs a'trous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels]
.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width]
of
of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]
.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of VALID
or SAME
.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, than the a'trous convolution is applied and stride
must be set to 1.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of biases
. If
normalizer_fn
is provided then biases_initializer
and
biases_regularizer
are ignored and biases
are not created nor added.
default set to None for no normalizer function
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or
a dictionay containing a different list of collection per variable.
outputs_collections: collection to add the outputs.
trainable: If True
also add variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES
(see tf.Variable).
scope: Optional scope for variable_scope
.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both 'rate' and stride
are larger than one.
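For reference, a minimal sketch of calling the wrapped tf.contrib.layers.convolution2d directly (TF 1.x contrib API; shapes and names are illustrative):
```python
import tensorflow as tf

# Hypothetical batch of 32x32 RGB images.
images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])

# 64 3x3 filters with stride 1; kernel_size and stride are given as ints
# since both dimensions are equal (see Args above).
net = tf.contrib.layers.convolution2d(
    images, num_outputs=64, kernel_size=3, stride=1, padding='SAME')
```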
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ctc_greedy_decoder_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_greedy_decoder_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.ctc_greedy_decoder_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.ctc_greedy_decoder`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
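For reference, a minimal sketch of calling the wrapped tf.contrib.layers.fully_connected directly (TF 1.x contrib API; names are illustrative):
```python
import tensorflow as tf

# Hypothetical feature batch [batch_size, depth].
features = tf.placeholder(tf.float32, shape=[None, 128])

# A 256-unit hidden layer with the default activation, then a linear output
# layer: activation_fn=None skips the activation, as described above.
hidden = tf.contrib.layers.fully_connected(features, num_outputs=256)
logits = tf.contrib.layers.fully_connected(hidden, num_outputs=10, activation_fn=None)
```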
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ctc_loss(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_loss(*args, **kwargs)
It accepts the same arguments as tf.nn.ctc_loss.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.ctc_loss(x1, *args, **kwargs)
is equivalent to
builder.ctc_loss(*args, **kwargs)(x1)
tf.nn.ctc_loss
Computes the CTC (Connectionist Temporal Classification) Loss.
This op implements the CTC loss as presented in the article:
A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.
http://www.cs.toronto.edu/~graves/icml_2006.pdf
Input requirements:
sequence_length(b) <= time for all b
max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b.
Notes:
This class performs the softmax operation for you, so inputs should be e.g. linear projections of the outputs of an LSTM.
The inputs Tensor's innermost dimension size, num_classes, represents num_labels + 1 classes, where num_labels is the number of true labels, and the largest value (num_classes - 1) is reserved for the blank label.
For example, for a vocabulary containing 3 labels [a, b, c], num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank: 3}.
Regarding the arguments preprocess_collapse_repeated and ctc_merge_repeated:
If preprocess_collapse_repeated is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.
If ctc_merge_repeated is set False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.
Here is a table of the (roughly) expected first order behavior:
preprocess_collapse_repeated=False, ctc_merge_repeated=True
Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.
preprocess_collapse_repeated=True, ctc_merge_repeated=False
Never learns to output repeated classes, as they are collapsed in the input labels before training.
preprocess_collapse_repeated=False, ctc_merge_repeated=False
Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.
preprocess_collapse_repeated=True, ctc_merge_repeated=True
Untested. Very likely will not learn to output repeated classes.
Args:
inputs: 3-D float Tensor. If time_major == False, this will be a Tensor shaped [batch_size x max_time x num_classes]. If time_major == True (default), this will be a Tensor shaped [max_time x batch_size x num_classes]. The logits.
labels: An int32 SparseTensor. labels.indices[i, :] == [b, t] means labels.values[i] stores the id for (batch b, time t). labels.values[i] must take on values in [0, num_labels). See core/ops/ctc_ops.cc for more details.
sequence_length: 1-D int32 vector, size [batch_size]. The sequence lengths.
preprocess_collapse_repeated: Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
ctc_merge_repeated: Boolean. Default: True.
time_major: The shape format of the inputs Tensors. If True, these Tensors must be shaped [max_time, batch_size, num_classes]. If False, these Tensors must be shaped [batch_size, max_time, num_classes]. Using time_major = True (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form.
Returns:
A 1-D float Tensor, size [batch], containing the negative log probabilities.
Raises:
TypeError: if labels is not a SparseTensor.
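A minimal sketch of the wrapped op, following the inputs-first argument order documented above (newer TensorFlow releases moved labels to the first position, so treat this as illustrative):
```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, shape=[None, None, 28])  # time-major
labels = tf.sparse_placeholder(tf.int32)                     # SparseTensor of label ids
seq_len = tf.placeholder(tf.int32, shape=[None])

loss = tf.nn.ctc_loss(logits, labels, seq_len)  # 1-D [batch] of negative log probs
cost = tf.reduce_mean(loss)
```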
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ctc_loss_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_loss_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.ctc_loss_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.ctc_loss`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ctc_loss_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ctc_loss_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.ctc_loss_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.ctc_loss`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def cumprod(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cumprod(*args, **kwargs)
It accepts the same arguments as tensorflow.cumprod.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.cumprod(x1, *args, **kwargs)
is equivalent to
builder.cumprod(*args, **kwargs)(x1)
tensorflow.cumprod
Compute the cumulative product of the tensor `x` along `axis`.
By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:
tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]
By setting the exclusive kwarg to True, an exclusive cumprod is performed instead (starting from the multiplicative identity, 1):
tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]
By setting the reverse kwarg to True, the cumprod is performed in the opposite direction:
tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]
This is more efficient than using separate tf.reverse ops.
The reverse and exclusive kwargs can also be combined:
tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]
Args:
x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
axis: A Tensor of type int32 (default: 0).
reverse: A bool (default: False).
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as x.
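A small worked example of the three modes; the values follow directly from the definitions above:
```python
import tensorflow as tf

x = tf.constant([2.0, 3.0, 4.0])
inclusive = tf.cumprod(x)                  # [2., 6., 24.]
exclusive = tf.cumprod(x, exclusive=True)  # [1., 2., 6.]  (starts at the identity, 1)
reverse = tf.cumprod(x, reverse=True)      # [24., 12., 4.]
```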
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def cumsum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.cumsum(*args, **kwargs)
It accepts the same arguments as tensorflow.cumsum.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.cumsum(x1, *args, **kwargs)
is equivalent to
builder.cumsum(*args, **kwargs)(x1)
tensorflow.cumsum
Compute the cumulative sum of the tensor `x` along `axis`.
By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output:
tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
By setting the exclusive kwarg to True, an exclusive cumsum is performed instead:
tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
By setting the reverse kwarg to True, the cumsum is performed in the opposite direction:
tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
This is more efficient than using separate tf.reverse ops.
The reverse and exclusive kwargs can also be combined:
tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
Args:
x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
axis: A Tensor of type int32 (default: 0).
reverse: A bool (default: False).
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as x.
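A small worked example mirroring the definitions above:
```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
inclusive = tf.cumsum(x)                               # [1., 3., 6.]
exclusive = tf.cumsum(x, exclusive=True)               # [0., 1., 3.]
combined = tf.cumsum(x, exclusive=True, reverse=True)  # [5., 3., 0.]
```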
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def data(
self, *args, **kwargs)
def data(self, *args, **kwargs):
    return Data(*args, **kwargs)
def decode_base64(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.decode_base64(*args, **kwargs)
It accepts the same arguments as tensorflow.decode_base64.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.decode_base64(x1, *args, **kwargs)
is equivalent to
builder.decode_base64(*args, **kwargs)(x1)
tensorflow.decode_base64
Decode web-safe base64-encoded strings.
Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use - and _ instead of + and /.
Args:
input: A Tensor of type string. Base64 strings to decode.
name: A name for the operation (optional).
Returns:
A Tensor of type string. Decoded strings.
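A minimal sketch (the sample string is illustrative):
```python
import tensorflow as tf

# "aGVsbG8" is web-safe base64 for "hello"; trailing padding is optional.
encoded = tf.constant(["aGVsbG8"])
decoded = tf.decode_base64(encoded)  # string Tensor: ["hello"]
```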
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def decode_csv(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.decode_csv(*args, **kwargs)
It accepts the same arguments as tensorflow.decode_csv.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.decode_csv(x1, *args, **kwargs)
is equivalent to
builder.decode_csv(*args, **kwargs)(x1)
tensorflow.decode_csv
Convert CSV records to tensors. Each column maps to one tensor.
RFC 4180 format is expected for the CSV records (https://tools.ietf.org/html/rfc4180). Note that we allow leading and trailing spaces with int or float fields.
Args:
records: A Tensor of type string. Each string is a record/row in the csv and all records should have the same format.
record_defaults: A list of Tensor objects with types from: float32, int32, int64, string. One tensor per column of the input record, with either a scalar default value for that column or empty if the column is required.
field_delim: An optional string. Defaults to ",". Delimiter to separate fields in a record.
name: A name for the operation (optional).
Returns:
A list of Tensor objects. Has the same type as record_defaults. Each tensor will have the same shape as records.
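A minimal sketch of the record_defaults mechanism (sample data is illustrative):
```python
import tensorflow as tf

# One record per element; record_defaults supplies one tensor per column:
# an int column defaulting to 0, a float column defaulting to 0.0, a string column.
records = tf.constant(["1,2.5,foo", "2,,bar"])  # second record uses the float default
record_defaults = [tf.constant([0]), tf.constant([0.0]), tf.constant([""])]
col_int, col_float, col_str = tf.decode_csv(records, record_defaults)
```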
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def decode_json_example(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.decode_json_example(*args, **kwargs)
It accepts the same arguments as tensorflow.decode_json_example.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.decode_json_example(x1, *args, **kwargs)
is equivalent to
builder.decode_json_example(*args, **kwargs)(x1)
tensorflow.decode_json_example
Convert JSON-encoded Example records to binary protocol buffer strings.
This op translates a tensor containing Example records, encoded using the standard JSON mapping, into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.
Args:
json_examples: A Tensor of type string. Each string is a JSON object serialized according to the JSON mapping of the Example proto.
name: A name for the operation (optional).
Returns:
A Tensor of type string. Each string is a binary Example protocol buffer corresponding to the respective element of json_examples.
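A rough sketch; the JSON payload below is illustrative only and assumes the standard proto3 JSON mapping of the Example proto:
```python
import tensorflow as tf

# Illustrative JSON-encoded Example with a single float feature "x".
json_examples = tf.constant(
    ['{"features": {"feature": {"x": {"floatList": {"value": [1.0]}}}}}'])
binary_examples = tf.decode_json_example(json_examples)  # binary Example protos
```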
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def decode_raw(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.decode_raw(*args, **kwargs)
It accepts the same arguments as tensorflow.decode_raw.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.decode_raw(x1, *args, **kwargs)
is equivalent to
builder.decode_raw(*args, **kwargs)(x1)
tensorflow.decode_raw
Reinterpret the bytes of a string as a vector of numbers.
Args:
bytes: A Tensor of type string. All the elements must have the same length.
out_type: A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64.
little_endian: An optional bool. Defaults to True. Whether the input bytes are in little-endian order. Ignored for out_type values that are stored in a single byte like uint8.
name: A name for the operation (optional).
Returns:
A Tensor of type out_type. A Tensor with one more dimension than the input bytes. The added dimension will have size equal to the length of the elements of bytes divided by the number of bytes to represent out_type.
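A minimal sketch of the shape rule described above (sample bytes are illustrative):
```python
import tensorflow as tf

# Each 4-byte string becomes one int32 (little-endian by default), so the
# added dimension has size 4 / 4 = 1.
raw = tf.constant([b"\x01\x00\x00\x00"])
ints = tf.decode_raw(raw, tf.int32)  # shape [1, 1], value [[1]]
```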
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def delete_session_tensor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.delete_session_tensor(*args, **kwargs)
It accepts the same arguments as tensorflow.delete_session_tensor.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.delete_session_tensor(x1, *args, **kwargs)
is equivalent to
builder.delete_session_tensor(*args, **kwargs)(x1)
tensorflow.delete_session_tensor
Delete the tensor for the given tensor handle.
This is EXPERIMENTAL and subject to change.
Delete the tensor of a given tensor handle. The tensor is produced in a previous run() and stored in the state of the session.
Args:
handle: The string representation of a persistent tensor handle.
name: Optional name prefix for the return tensor.
Returns: A pair of graph elements. The first is a placeholder for feeding a tensor handle and the second is a deletion operation.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depth_to_space(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depth_to_space(*args, **kwargs)
It accepts the same arguments as tensorflow.depth_to_space.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.depth_to_space(x1, *args, **kwargs)
is equivalent to
builder.depth_to_space(*args, **kwargs)(x1)
tensorflow.depth_to_space
DepthToSpace for tensors of type T.
Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved.
- Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size.
- The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size.
- The depth of the input tensor must be divisible by block_size * block_size.
That is, assuming the input is in the shape [batch, height, width, depth], the shape of the output will be [batch, height*block_size, width*block_size, depth/(block_size*block_size)].
This operation requires that the input tensor be of rank 4, that block_size be >= 1, and that block_size * block_size be a divisor of the input depth.
This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.
For example, given this input of shape [1, 1, 1, 4], and a block size of 2:
x = [[[[1, 2, 3, 4]]]]
This operation will output a tensor of shape [1, 2, 2, 1]:
[[[[1], [2]],
  [[3], [4]]]]
Here, the input has a batch of 1 and each batch element has shape [1, 1, 4]; the corresponding output will have 2x2 elements and a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1].
For an input tensor with larger depth, here of shape [1, 1, 1, 12], e.g.
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
This operation, for block size of 2, will return the following tensor of shape [1, 2, 2, 3]:
[[[[1, 2, 3], [4, 5, 6]],
  [[7, 8, 9], [10, 11, 12]]]]
Similarly, for the following input of shape [1, 2, 2, 4], and a block size of 2:
x = [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]
the operator will return the following tensor of shape [1, 4, 4, 1]:
x = [[[[1], [2], [5], [6]],
      [[3], [4], [7], [8]],
      [[9], [10], [13], [14]],
      [[11], [12], [15], [16]]]]
Args:
input: A Tensor.
block_size: An int that is >= 2. The size of the spatial block, same as in Space2Depth.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as input.
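A minimal sketch reproducing the first example above:
```python
import tensorflow as tf

# [1, 1, 1, 4] -> [1, 2, 2, 1] with block_size=2.
x = tf.constant([[[[1, 2, 3, 4]]]])
y = tf.depth_to_space(x, block_size=2)  # [[[[1], [2]], [[3], [4]]]]
```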
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d(*args, **kwargs)
It accepts the same arguments as tf.nn.depthwise_conv2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.depthwise_conv2d(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d(*args, **kwargs)(x1)
tf.nn.depthwise_conv2d
Depthwise 2-D convolution.
Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter tensor of shape [filter_height, filter_width, in_channels, channel_multiplier] containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels.
In detail,
output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * filter[di, dj, k, q]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
Args:
input: 4-D with shape [batch, in_height, in_width, in_channels].
filter: 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier].
strides: 1-D of size 4. The stride of the sliding window for each dimension of input.
padding: A string, either 'VALID' or 'SAME'. The padding algorithm. See the comment here.
name: A name for this operation (optional).
Returns:
A 4-D Tensor of shape [batch, out_height, out_width, in_channels * channel_multiplier].
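A minimal sketch of the channel-expansion behavior (shapes are illustrative):
```python
import tensorflow as tf

# 3 input channels with channel_multiplier 2, so the output has 3 * 2 = 6 channels.
images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
filters = tf.Variable(tf.truncated_normal([3, 3, 3, 2], stddev=0.1))
out = tf.nn.depthwise_conv2d(images, filters, strides=[1, 1, 1, 1], padding='SAME')
```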
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native(*args, **kwargs)
It accepts the same arguments as tf.nn.depthwise_conv2d_native.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.depthwise_conv2d_native(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native(*args, **kwargs)(x1)
tf.nn.depthwise_conv2d_native
Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.
Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels.
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * filter[di, dj, k, q]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
Args:
input: A Tensor. Must be one of the following types: float32, float64.
filter: A Tensor. Must have the same type as input.
strides: A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input.
padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as input.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_backprop_filter(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_backprop_filter(*args, **kwargs)
It accepts the same arguments as tf.nn.depthwise_conv2d_native_backprop_filter.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.depthwise_conv2d_native_backprop_filter(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_backprop_filter(*args, **kwargs)(x1)
tf.nn.depthwise_conv2d_native_backprop_filter
Computes the gradients of depthwise convolution with respect to the filter.
Args:
input: A Tensor. Must be one of the following types: float32, float64. 4-D with shape [batch, in_height, in_width, in_channels].
filter_sizes: A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, depthwise_multiplier] tensor.
out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as input. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_backprop_filter_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_backprop_filter_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_backprop_filter_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d_native_backprop_filter`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_backprop_filter_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_backprop_filter_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_backprop_filter_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d_native_backprop_filter`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_backprop_input(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_backprop_input(*args, **kwargs)
It accepts the same arguments as tf.nn.depthwise_conv2d_native_backprop_input.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.depthwise_conv2d_native_backprop_input(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_backprop_input(*args, **kwargs)(x1)
tf.nn.depthwise_conv2d_native_backprop_input
Computes the gradients of depthwise convolution with respect to the input.
Args:
input_sizes: A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor.
filter: A Tensor. Must be one of the following types: float32, float64. 4-D with shape [filter_height, filter_width, in_channels, depthwise_multiplier].
out_backprop: A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides: A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as filter. 4-D with shape [batch, in_height, in_width, in_channels]. Gradient w.r.t. the input of the convolution.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_backprop_input_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_backprop_input_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_backprop_input_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d_native_backprop_input`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_backprop_input_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_backprop_input_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_backprop_input_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d_native_backprop_input`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d_native`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def depthwise_conv2d_native_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.depthwise_conv2d_native_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.depthwise_conv2d_native_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.depthwise_conv2d_native`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def deserialize_many_sparse(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.deserialize_many_sparse(*args, **kwargs)

It accepts the same arguments as tensorflow.deserialize_many_sparse. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.deserialize_many_sparse(x1, *args, **kwargs)

is equivalent to

builder.deserialize_many_sparse(*args, **kwargs)(x1)

tensorflow.deserialize_many_sparse

Deserialize and concatenate `SparseTensors` from a serialized minibatch.

The input serialized_sparse must be a string matrix of shape [N x 3] where N is the minibatch size and the rows correspond to packed outputs of serialize_sparse. The ranks of the original SparseTensor objects must all match. When the final SparseTensor is created, it has rank one higher than the ranks of the incoming SparseTensor objects (they have been concatenated along a new row dimension).

The output SparseTensor object's shape values for all dimensions but the first are the max across the input SparseTensor objects' shape values for the corresponding dimensions. Its first shape value is N, the minibatch size.

The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run sparse_reorder to restore index ordering.

For example, if the serialized input is a [2, 3] matrix representing two original SparseTensor objects:

    index = [ 0]
            [10]
            [20]
    values = [1, 2, 3]
    shape = [50]

and

    index = [ 2]
            [10]
    values = [4, 5]
    shape = [30]

then the final deserialized SparseTensor will be:

    index = [0  0]
            [0 10]
            [0 20]
            [1  2]
            [1 10]
    values = [1, 2, 3, 4, 5]
    shape = [2 50]

Args:
serialized_sparse: 2-D Tensor of type string of shape [N, 3]. The serialized and packed SparseTensor objects.
dtype: The dtype of the serialized SparseTensor objects.
rank: (optional) Python int, the rank of the SparseTensor objects.
name: A name prefix for the returned tensors (optional).

Returns:
A SparseTensor representing the deserialized SparseTensors, concatenated along the SparseTensors' first dimension. All of the serialized SparseTensors must have had the same rank and type.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def device(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.device(*args, **kwargs)

It accepts the same arguments as tensorflow.device. However, a partial with the arguments is returned which expects any argument x and completely ignores it, such that

tensorflow.device(*args, **kwargs)

is equivalent to

builder.device(*args, **kwargs)(x)

tensorflow.device

Wrapper for `Graph.device()` using the default graph. See Graph.device() for more details.

Args:
device_name_or_function: The device name or function to use in the context.

Returns: A context manager that specifies the default device to use for newly created ops.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then0(fn, *args, **kwargs)
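Since `device` is generated with `Then0`, the resulting partial ignores the value applied to it and only forwards the held arguments. A minimal sketch of both forms, assuming a `TensorBuilder` instance is available as `builder` (the exact import path depends on your setup):

```python
import tensorflow as tf

# builder.device(...) returns a partial that ignores its input entirely,
# so any placeholder value can be applied to obtain the context manager.
with builder.device("/cpu:0")(None):
    a = tf.constant([1.0, 2.0])

# Equivalent direct form:
with tf.device("/cpu:0"):
    b = tf.constant([1.0, 2.0])
```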
def diag(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.diag(*args, **kwargs)

It accepts the same arguments as tensorflow.diag. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.diag(x1, *args, **kwargs)

is equivalent to

builder.diag(*args, **kwargs)(x1)

tensorflow.diag

Returns a diagonal tensor with given diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.

For example:

```prettyprint
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
```

Args:
diagonal: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is at most 3.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as diagonal.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
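To make the partial-application contract concrete, here is a minimal sketch; it assumes a `TensorBuilder` instance named `builder` and a TensorFlow version where `tf.diag` exists:

```python
import tensorflow as tf

v = tf.constant([1, 2, 3, 4])

direct = tf.diag(v)          # 4x4 tensor with v on the diagonal
curried = builder.diag()(v)  # same op: the partial waits for the 1st argument
```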
def diag_part(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.diag_part(*args, **kwargs)

It accepts the same arguments as tensorflow.diag_part. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.diag_part(x1, *args, **kwargs)

is equivalent to

builder.diag_part(*args, **kwargs)(x1)

tensorflow.diag_part

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].

For example:

```prettyprint
# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]
tf.diag_part(input) ==> [1, 2, 3, 4]
```

Args:
input: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is 2, 4, or 6.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as input. The extracted diagonal.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
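`diag_part` inverts `diag`, so a quick round-trip illustrates both generated methods. A sketch under the same assumption as above (a `TensorBuilder` instance named `builder`):

```python
import tensorflow as tf

v = tf.constant([1, 2, 3, 4])
m = tf.diag(v)                     # rank-2 tensor of shape [4, 4]
restored = builder.diag_part()(m)  # == tf.diag_part(m) -> [1, 2, 3, 4]
```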
def digamma(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.digamma(*args, **kwargs)

It accepts the same arguments as tensorflow.digamma. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.digamma(x1, *args, **kwargs)

is equivalent to

builder.digamma(*args, **kwargs)(x1)

tensorflow.digamma

Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise.

Args:
x: A Tensor. Must be one of the following types: half, float32, float64.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as x.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d(*args, **kwargs)

It accepts the same arguments as tf.nn.dilation2d. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.nn.dilation2d(x1, *args, **kwargs)

is equivalent to

builder.dilation2d(*args, **kwargs)(x1)

tf.nn.dilation2d

Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.

The input tensor has shape [batch, in_height, in_width, depth] and the filter tensor has shape [filter_height, filter_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with conv2d, we use unmirrored filters):

    output[b, y, x, c] =
        max_{dy, dx} input[b,
                           strides[1] * y + rates[1] * dy,
                           strides[2] * x + rates[2] * dx,
                           c] + filter[dy, dx, c]

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of input by the filter is equal to the negation of the erosion of -input by the reflected filter.

Args:
input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
rates: A list of ints that has length >= 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as input. 4-D with shape [batch, out_height, out_width, depth].

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
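Because all but the first argument are captured by the partial, the required `strides`, `rates`, and `padding` can be fixed once and the resulting function reused. A hedged sketch, assuming a `TensorBuilder` instance named `builder`:

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 1])  # [batch, h, w, depth]
kernel = tf.Variable(tf.zeros([3, 3, 1]))               # [fh, fw, depth]

dilate = builder.dilation2d(kernel,
                            strides=[1, 1, 1, 1],
                            rates=[1, 1, 1, 1],
                            padding="SAME")
# == tf.nn.dilation2d(images, kernel, strides=[1,1,1,1],
#                     rates=[1,1,1,1], padding="SAME")
out = dilate(images)
```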
def dilation2d_backprop_filter(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_backprop_filter(*args, **kwargs)

It accepts the same arguments as tf.nn.dilation2d_backprop_filter. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.nn.dilation2d_backprop_filter(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_backprop_filter(*args, **kwargs)(x1)

tf.nn.dilation2d_backprop_filter

Computes the gradient of morphological 2-D dilation with respect to the filter.

Args:
input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, depth].
strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as input. 3-D with shape [filter_height, filter_width, depth].

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_backprop_filter_conv2d_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_backprop_filter_conv2d_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.convolution2d. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_backprop_filter_conv2d_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dilation2d_backprop_filter`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.

Performs atrous convolution with input stride equal to rate if rate is greater than one.

Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: a tensor representing the output of the operation.

Raises:
ValueError: if both rate and stride are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_backprop_filter_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_backprop_filter_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.fully_connected. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_backprop_filter_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dilation2d_backprop_filter`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises:
ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_backprop_input(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_backprop_input(*args, **kwargs)

It accepts the same arguments as tf.nn.dilation2d_backprop_input. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.nn.dilation2d_backprop_input(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_backprop_input(*args, **kwargs)(x1)

tf.nn.dilation2d_backprop_input

Computes the gradient of morphological 2-D dilation with respect to the input.

Args:
input: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. 4-D with shape [batch, in_height, in_width, depth].
filter: A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
out_backprop: A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, depth].
strides: A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
rates: A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as input. 4-D with shape [batch, in_height, in_width, depth].

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_backprop_input_conv2d_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_backprop_input_conv2d_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.convolution2d. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_backprop_input_conv2d_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dilation2d_backprop_input`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.

Performs atrous convolution with input stride equal to rate if rate is greater than one.

Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: a tensor representing the output of the operation.

Raises:
ValueError: if both rate and stride are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_backprop_input_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_backprop_input_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.fully_connected. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_backprop_input_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dilation2d_backprop_input`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises:
ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_conv2d_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_conv2d_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.convolution2d. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_conv2d_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dilation2d`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.

Performs atrous convolution with input stride equal to rate if rate is greater than one.

Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: a tensor representing the output of the operation.

Raises:
ValueError: if both rate and stride are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dilation2d_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dilation2d_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.fully_connected. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

builder.dilation2d_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dilation2d`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises:
ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def div(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.div(*args, **kwargs)

It accepts the same arguments as tensorflow.div. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.div(x1, *args, **kwargs)

is equivalent to

builder.div(*args, **kwargs)(x1)

tensorflow.div

Returns x / y element-wise.

NOTE: Div supports broadcasting.

Args:
x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as x.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
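The held arguments fill in everything after the piped-in value, so a reusable "divide by constant" can be built in one line. A minimal sketch, assuming a `TensorBuilder` instance named `builder`:

```python
import tensorflow as tf

x = tf.constant([2.0, 4.0, 6.0])

halve = builder.div(2.0)  # holds y = 2.0, waits for x
y = halve(x)              # == tf.div(x, 2.0) -> [1.0, 2.0, 3.0]
```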
def drop_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.drop_layer(*args, **kwargs)

It accepts the same arguments as tb.drop_layer. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tb.drop_layer(x1, *args, **kwargs)

is equivalent to

builder.drop_layer(*args, **kwargs)(x1)

tb.drop_layer

Computes dropout.

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

Args:
x: A tensor.
keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept.
noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
name: A name for this operation (optional).

Returns:
A Tensor of the same shape of x.

Raises:
ValueError: If keep_prob is not in (0, 1].

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dropout(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dropout(*args, **kwargs)

It accepts the same arguments as tf.nn.dropout. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.nn.dropout(x1, *args, **kwargs)

is equivalent to

builder.dropout(*args, **kwargs)(x1)

tf.nn.dropout

Computes dropout.

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

Args:
x: A tensor.
keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept.
noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
name: A name for this operation (optional).

Returns:
A Tensor of the same shape of x.

Raises:
ValueError: If keep_prob is not in (0, 1].

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
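This is one of the more common partials in practice: fix `keep_prob` once and apply the result to any activation tensor. A sketch assuming a `TensorBuilder` instance named `builder`:

```python
import tensorflow as tf

h = tf.placeholder(tf.float32, [None, 128])

drop = builder.dropout(keep_prob=0.5)  # the partial holds keep_prob
h_drop = drop(h)                       # == tf.nn.dropout(h, keep_prob=0.5)
```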
def dropout_conv2d_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dropout_conv2d_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.convolution2d. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

builder.dropout_conv2d_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dropout`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.

Performs atrous convolution with input stride equal to rate if rate is greater than one.

Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: a tensor representing the output of the operation.

Raises:
ValueError: if both rate and stride are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dropout_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dropout_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.fully_connected. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

builder.dropout_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dropout`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises:
ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dynamic_partition(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dynamic_partition(*args, **kwargs)

It accepts the same arguments as tensorflow.dynamic_partition. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.dynamic_partition(x1, *args, **kwargs)

is equivalent to

builder.dynamic_partition(*args, **kwargs)(x1)

tensorflow.dynamic_partition

Partitions `data` into `num_partitions` tensors using indices from `partitions`.

For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i.

In detail,

    outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
    outputs[i] = pack([data[js, ...] for js if partitions[js] == i])

data.shape must start with partitions.shape.

For example:

    # Scalar partitions
    partitions = 1
    num_partitions = 2
    data = [10, 20]
    outputs[0] = []  # Empty with shape [0, 2]
    outputs[1] = [[10, 20]]

    # Vector partitions
    partitions = [0, 0, 1, 1, 0]
    num_partitions = 2
    data = [10, 20, 30, 40, 50]
    outputs[0] = [10, 20, 50]
    outputs[1] = [30, 40]

Args:
data: A Tensor.
partitions: A Tensor of type int32. Any shape. Indices in the range [0, num_partitions).
num_partitions: An int that is >= 1. The number of partitions to output.
name: A name for the operation (optional).

Returns:
A list of num_partitions Tensor objects of the same type as data.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
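Here the omitted 1st argument is `data`, so `partitions` and `num_partitions` live in the partial. A sketch mirroring the vector example above, assuming a `TensorBuilder` instance named `builder`:

```python
import tensorflow as tf

data = tf.constant([10, 20, 30, 40, 50])
partitions = tf.constant([0, 0, 1, 1, 0])

split = builder.dynamic_partition(partitions, 2)
outs = split(data)  # == tf.dynamic_partition(data, partitions, 2)
# outs[0] -> [10, 20, 50]; outs[1] -> [30, 40]
```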
def dynamic_rnn(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dynamic_rnn(*args, **kwargs)

It accepts the same arguments as tf.nn.dynamic_rnn. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.nn.dynamic_rnn(x1, *args, **kwargs)

is equivalent to

builder.dynamic_rnn(*args, **kwargs)(x1)

tf.nn.dynamic_rnn

Creates a recurrent neural network specified by RNNCell `cell`.

This function is functionally identical to the function rnn above, but performs fully dynamic unrolling of inputs.

Unlike rnn, the input inputs is not a Python list of Tensors, one for each frame. Instead, inputs may be a single Tensor where the maximum time is either the first or second dimension (see the parameter time_major). Alternatively, it may be a (possibly nested) tuple of Tensors, each of them having matching batch and time dimensions.

The corresponding output is either a single Tensor having the same number of time steps and batch size, or a (possibly nested) tuple of such tensors, matching the nested structure of cell.output_size.

The parameter sequence_length is optional and is used to copy-through state and zero-out outputs when past a batch element's sequence length. So it's more for correctness than performance, unlike in rnn().

Args:
cell: An instance of RNNCell.
inputs: The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape `[batch_size, max_time, ...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape `[max_time, batch_size, ...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size, ...]`.
sequence_length: (optional) An int32/int64 vector sized [batch_size].
initial_state: (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size.
dtype: (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype.
parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
time_major: The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "RNN".

Returns: A pair (outputs, state) where:
outputs: The RNN output `Tensor`. If time_major == False (default), this will be a `Tensor` shaped `[batch_size, max_time, cell.output_size]`. If time_major == True, this will be a `Tensor` shaped `[max_time, batch_size, cell.output_size]`. Note, if `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `outputs` will be a tuple having the same structure as `cell.output_size`, containing Tensors having shapes corresponding to the shape data in `cell.output_size`.
state: The final state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes.

Raises:
TypeError: If cell is not an instance of RNNCell.
ValueError: If inputs is None or an empty list.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
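Note the mechanical consequence of the generation rule here: the omitted 1st argument of `tf.nn.dynamic_rnn` is `cell`, so the partial expects the cell, not the inputs. A hedged sketch, assuming a `TensorBuilder` instance named `builder` and a TensorFlow version exposing `tf.nn.rnn_cell.BasicLSTMCell` (some versions keep the cell classes under `tf.contrib.rnn` instead):

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 20, 8])  # batch-major [batch, time, depth]
cell = tf.nn.rnn_cell.BasicLSTMCell(32)

run_rnn = builder.dynamic_rnn(inputs=inputs, dtype=tf.float32)
# == tf.nn.dynamic_rnn(cell, inputs=inputs, dtype=tf.float32)
outputs, state = run_rnn(cell)
```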
def dynamic_rnn_conv2d_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dynamic_rnn_conv2d_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.convolution2d. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.convolution2d(x1, *args, **kwargs)

is equivalent to

builder.dynamic_rnn_conv2d_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dynamic_rnn`.

tf.contrib.layers.convolution2d

Adds a 2D convolution followed by an optional batch_norm layer.

convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.

Performs atrous convolution with input stride equal to rate if rate is greater than one.

Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: a tensor representing the output of the operation.

Raises:
ValueError: if both rate and stride are larger than one.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dynamic_rnn_layer(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dynamic_rnn_layer(*args, **kwargs)

It accepts the same arguments as tf.contrib.layers.fully_connected. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tf.contrib.layers.fully_connected(x1, *args, **kwargs)

is equivalent to

builder.dynamic_rnn_layer(*args, **kwargs)(x1)

and the keyword argument `activation_fn` is set to `tf.nn.dynamic_rnn`.

tf.contrib.layers.fully_connected

Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

Note: if inputs has a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.

Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.

Returns: the tensor variable representing the result of the series of operations.

Raises:
ValueError: if x has rank less than 2 or if its last dimension is not set.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def dynamic_stitch(self, *args, **kwargs)

THIS METHOD IS AUTOMATICALLY GENERATED

builder.dynamic_stitch(*args, **kwargs)

It accepts the same arguments as tensorflow.dynamic_stitch. However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that

tensorflow.dynamic_stitch(x1, *args, **kwargs)

is equivalent to

builder.dynamic_stitch(*args, **kwargs)(x1)

tensorflow.dynamic_stitch

Interleave the values from the `data` tensors into a single tensor.

Builds a merged tensor such that

    merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

    # Scalar indices
    merged[indices[m], ...] = data[m][...]

    # Vector indices
    merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

    merged.shape = [max(indices)] + constant

Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result.

For example:

    indices[0] = 6
    indices[1] = [4, 1]
    indices[2] = [[5, 2], [0, 3]]
    data[0] = [61, 62]
    data[1] = [[41, 42], [11, 12]]
    data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
    merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
              [51, 52], [61, 62]]

Args:
indices: A list of at least 1 Tensor objects of type int32.
data: A list with the same number of Tensor objects as indices of Tensor objects of the same type.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as data.

@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
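The omitted 1st argument is `indices`, so the `data` list rides in the partial. A sketch assuming a `TensorBuilder` instance named `builder`:

```python
import tensorflow as tf

indices = [tf.constant([0, 2]), tf.constant([1, 3])]
data = [tf.constant([10, 30]), tf.constant([20, 40])]

stitch = builder.dynamic_stitch(data)
merged = stitch(indices)  # == tf.dynamic_stitch(indices, data) -> [10, 20, 30, 40]
```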
def edit_distance(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.edit_distance(*args, **kwargs)
It accepts the same arguments as `tensorflow.edit_distance`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.edit_distance(x1, *args, **kwargs)
is equivalent to
builder.edit_distance(*args, **kwargs)(x1)
tensorflow.edit_distance
Computes the Levenshtein distance between sequences.
This operation takes variable-length sequences (`hypothesis` and `truth`),
each provided as a `SparseTensor`, and computes the Levenshtein distance.
You can normalize the edit distance by length of `truth` by setting
`normalize` to true.
For example, given the following input:
```python
# 'hypothesis' is a tensor of shape [2, 1] with variable-length values:
#   (0,0) = ["a"]
#   (1,0) = ["b"]
hypothesis = tf.SparseTensor(
    [[0, 0, 0], [1, 0, 0]],
    ["a", "b"],
    (2, 1, 1))

# 'truth' is a tensor of shape [2, 2] with variable-length values:
#   (0,0) = []
#   (0,1) = ["a"]
#   (1,0) = ["b", "c"]
#   (1,1) = ["a"]
truth = tf.SparseTensor(
    [[0, 1, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]],
    ["a", "b", "c", "a"],
    (2, 2, 2))

normalize = True
```
This operation would return the following:
```python
# 'output' is a tensor of shape [2, 2] with edit distances normalized
# by 'truth' lengths.
output ==> [[inf, 1.0],  # (0,0): no truth, (0,1): no hypothesis
            [0.5, 1.0]]  # (1,0): addition, (1,1): no hypothesis
```
Args:
hypothesis: A `SparseTensor` containing hypothesis sequences.
truth: A `SparseTensor` containing truth sequences.
normalize: A `bool`. If `True`, normalizes the Levenshtein distance by
length of truth.
name: A name for the operation (optional).
Returns:
A dense `Tensor` with rank `R - 1`, where R is the rank of the
`SparseTensor` inputs `hypothesis` and `truth`.
Raises:
TypeError: If either `hypothesis` or `truth` are not a `SparseTensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
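A sketch of the builder-style call described above; the `tensorbuilder` import and the `tb` builder instance are assumptions based on the `tb.*` references elsewhere in these docs:

```python
import tensorflow as tf
import tensorbuilder as tb  # assumed import; these docs refer to the builder as `tb`

hypothesis = tf.SparseTensor([[0, 0, 0], [1, 0, 0]], ["a", "b"], (2, 1, 1))
truth = tf.SparseTensor([[0, 1, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]],
                        ["a", "b", "c", "a"], (2, 2, 2))

# The builder omits the 1st argument, so these two should be equivalent:
d1 = tf.edit_distance(hypothesis, truth, normalize=True)
d2 = tb.edit_distance(truth, normalize=True)(hypothesis)
```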
def einsum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.einsum(*args, **kwargs)
It accepts the same arguments as `tensorflow.einsum`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.einsum(x1, *args, **kwargs)
is equivalent to
builder.einsum(*args, **kwargs)(x1)
tensorflow.einsum
A generalized contraction between tensors of arbitrary dimension.
Like numpy.einsum.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
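Since the docstring above only points to numpy.einsum, here is a small sketch of the index notation; note that early TensorFlow versions supported only a subset of numpy's einsum expressions:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# 'ij,jk->ik' sums over the shared index j: an ordinary matrix product.
matmul = tf.einsum('ij,jk->ik', a, b)

# 'ij->ji' permutes the indices: a transpose.
transpose = tf.einsum('ij->ji', a)
```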
def elu(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.elu(*args, **kwargs)
It accepts the same arguments as `tf.nn.elu`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.elu(x1, *args, **kwargs)
is equivalent to
builder.elu(*args, **kwargs)(x1)
tf.nn.elu
Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise.
See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
Args:
features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `features`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
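A small sketch of the piecewise behavior (values below zero follow exp(x) - 1, values at or above zero pass through unchanged):

```python
import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 1.0])
y = tf.nn.elu(x)
# y ≈ [exp(-2) - 1, exp(-0.5) - 1, 0.0, 1.0]
#   ≈ [-0.8647, -0.3935, 0.0, 1.0]
```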
def elu_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.elu_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.elu_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.elu`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
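A sketch of what this derived method amounts to; the `tensorbuilder` import and the `tb` builder instance are assumptions based on the `tb.*` references elsewhere in these docs:

```python
import tensorflow as tf
import tensorbuilder as tb  # assumed import

x = tf.placeholder(tf.float32, [None, 28, 28, 1])

# The generated method pre-sets activation_fn=tf.nn.elu, so these two
# should build the same layer:
y1 = tf.contrib.layers.convolution2d(x, num_outputs=32, kernel_size=[3, 3],
                                     activation_fn=tf.nn.elu)
y2 = tb.elu_conv2d_layer(num_outputs=32, kernel_size=[3, 3])(x)
```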
def elu_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.elu_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.elu_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.elu`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension,
e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def embedding_lookup(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.embedding_lookup(*args, **kwargs)
It accepts the same arguments as `tf.nn.embedding_lookup`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.embedding_lookup(x1, *args, **kwargs)
is equivalent to
builder.embedding_lookup(*args, **kwargs)(x1)
tf.nn.embedding_lookup
Looks up `ids` in a list of embedding tensors.
This function is used to perform parallel lookups on the list of
tensors in `params`. It is a generalization of `tf.gather()`, where `params` is
interpreted as a partition of a larger embedding tensor.
If `len(params) > 1`, each element `id` of `ids` is partitioned between
the elements of `params` according to the `partition_strategy`.
In all strategies, if the id space does not evenly divide the number of
partitions, each of the first `(max_id + 1) % len(params)` partitions will
be assigned one more id.
If `partition_strategy` is `"mod"`, we assign each id to partition
`p = id % len(params)`. For instance,
13 ids are split across 5 partitions as:
[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
If `partition_strategy` is `"div"`, we assign ids to partitions in a
contiguous manner. In this case, 13 ids are split across 5 partitions as:
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
The results of the lookup are concatenated into a dense
tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
Args:
params: A list of tensors with the same type and which can be concatenated
along dimension 0. Each `Tensor` must be appropriately sized for the given
`partition_strategy`.
ids: A `Tensor` with type `int32` or `int64` containing the ids to be looked
up in `params`.
partition_strategy: A string specifying the partitioning strategy, relevant
if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
is `"mod"`.
name: A name for the operation (optional).
validate_indices: Whether or not to validate gather indices.
Returns:
A `Tensor` with the same type as the tensors in `params`.
Raises:
ValueError: If `params` is empty.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
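A minimal sketch of both the single-table and the sharded ("mod" strategy) cases:

```python
import tensorflow as tf

ids = tf.constant([0, 3, 7])

# Single table: 10 ids, each mapped to a 4-dim vector.
params = tf.random_uniform([10, 4])
vectors = tf.nn.embedding_lookup(params, ids)  # shape [3, 4]

# Sharded table: with the default "mod" strategy and 2 shards,
# id i lives in shard i % 2 (shard 0 holds 0,2,4,6,8; shard 1 holds 1,3,5,7,9).
shards = [tf.random_uniform([5, 4]), tf.random_uniform([5, 4])]
sharded_vectors = tf.nn.embedding_lookup(shards, ids)  # also shape [3, 4]
```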
def embedding_lookup_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.embedding_lookup_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.embedding_lookup_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.embedding_lookup`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def embedding_lookup_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.embedding_lookup_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.embedding_lookup_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.embedding_lookup`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension,
e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def embedding_lookup_sparse(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.embedding_lookup_sparse(*args, **kwargs)
It accepts the same arguments as `tf.nn.embedding_lookup_sparse`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.embedding_lookup_sparse(x1, *args, **kwargs)
is equivalent to
builder.embedding_lookup_sparse(*args, **kwargs)(x1)
tf.nn.embedding_lookup_sparse
Computes embeddings for the given ids and weights.
This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order.
It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Args:
params: A single tensor representing the complete embedding tensor,
or a list of P tensors all of the same shape except for the first dimension,
representing sharded embedding tensors.
sp_ids: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
where N is typically batch size and M is arbitrary.
sp_weights: either a SparseTensor of float / double weights, or None to
indicate all weights should be taken to be 1. If specified, sp_weights
must have exactly the same shape and indices as sp_ids.
partition_strategy: A string specifying the partitioning strategy, relevant
if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
is `"mod"`. See `tf.nn.embedding_lookup` for more details.
name: Optional name for the op.
combiner: A string specifying the reduction op. Currently "mean", "sqrtn"
and "sum" are supported.
"sum" computes the weighted sum of the embedding results for each row.
"mean" is the weighted sum divided by the total weight.
"sqrtn" is the weighted sum divided by the square root of the sum of the
squares of the weights.
Returns: A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.
In other words, if shape(combined params) = [p0, p1, ..., pm] and shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn] then shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
with combiner="mean", then the output will be a 3x20 matrix where
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = params[0, :] * 1.0
output[2, :] = params[1, :] * 3.0
Raises: TypeError: If sp_ids is not a SparseTensor, or if sp_weights is neither None nor SparseTensor. ValueError: If combiner is not one of {"mean", "sqrtn", "sum"}.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
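A sketch that reproduces the 10x20 example above with random parameters:

```python
import tensorflow as tf

params = tf.random_uniform([10, 20])  # the 10x20 embedding matrix

# The sp_ids / sp_weights from the example above, as SparseTensors.
indices = [[0, 0], [0, 1], [1, 0], [2, 3]]
sp_ids = tf.SparseTensor(indices,
                         tf.constant([1, 3, 0, 1], dtype=tf.int64),
                         [3, 4])
sp_weights = tf.SparseTensor(indices,
                             tf.constant([2.0, 0.5, 1.0, 3.0]),
                             [3, 4])

combined = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights,
                                         combiner="mean")  # shape [3, 20]
```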
def embedding_lookup_sparse_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.embedding_lookup_sparse_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.embedding_lookup_sparse_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.embedding_lookup_sparse`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def embedding_lookup_sparse_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.embedding_lookup_sparse_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.embedding_lookup_sparse_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.embedding_lookup_sparse`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension,
e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def encode_base64(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.encode_base64(*args, **kwargs)
It accepts the same arguments as `tensorflow.encode_base64`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.encode_base64(x1, *args, **kwargs)
is equivalent to
builder.encode_base64(*args, **kwargs)(x1)
tensorflow.encode_base64
Encode strings into web-safe base64 format.
Refer to the following article for more information on base64 format: en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the end so that the encoded string has a length that is a multiple of 4. See the Padding section of the link above.
Web-safe means that the encoder uses - and _ instead of + and /.
Args:
input: A `Tensor` of type `string`. Strings to be encoded.
pad: An optional `bool`. Defaults to `False`.
Bool whether padding is applied at the ends.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`. Input strings encoded in base64.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
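A two-line sketch; `tf.decode_base64` is the inverse operation:

```python
import tensorflow as tf

s = tf.constant(["hello", "tensorflow"])
encoded = tf.encode_base64(s)        # web-safe: uses '-' and '_' for '+' and '/'
decoded = tf.decode_base64(encoded)  # round-trips back to the original strings
```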
def ensamble_dropout(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ensamble_dropout(*args, **kwargs)
It accepts the same arguments as `tb.ensamble_dropout`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tb.ensamble_dropout(x1, *args, **kwargs)
is equivalent to
builder.ensamble_dropout(*args, **kwargs)(x1)
tb.ensamble_dropout
None
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.equal(*args, **kwargs)
It accepts the same arguments as `tensorflow.equal`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.equal(x1, *args, **kwargs)
is equivalent to
builder.equal(*args, **kwargs)(x1)
tensorflow.equal
Returns the truth value of (x == y) element-wise.
NOTE: `Equal` supports broadcasting. More about broadcasting here
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
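A small sketch of the broadcasting behavior noted above:

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
y = tf.constant([1, 4])      # broadcast against each row of x

eq = tf.equal(x, y)          # [[True, False], [False, True]]
```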
def erf(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.erf(*args, **kwargs)
It accepts the same arguments as `tensorflow.erf`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.erf(x1, *args, **kwargs)
is equivalent to
builder.erf(*args, **kwargs)(x1)
tensorflow.erf
Computes the Gauss error function of `x` element-wise.
Args:
x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def erfc(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.erfc(*args, **kwargs)
It accepts the same arguments as `tensorflow.erfc`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.erfc(x1, *args, **kwargs)
is equivalent to
builder.erfc(*args, **kwargs)(x1)
tensorflow.erfc
Computes the complementary error function of `x` element-wise.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def erosion2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.erosion2d(*args, **kwargs)
It accepts the same arguments as `tf.nn.erosion2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.erosion2d(x1, *args, **kwargs)
is equivalent to
builder.erosion2d(*args, **kwargs)(x1)
tf.nn.erosion2d
Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.
The `value` tensor has shape `[batch, in_height, in_width, depth]` and the
`kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e.,
each input channel is processed independently of the others with its own
structuring function. The `output` tensor has shape
`[batch, out_height, out_width, depth]`. The spatial dimensions of the
output tensor depend on the `padding` algorithm. We currently only support the
default "NHWC" `data_format`.
In detail, the grayscale morphological 2-D erosion is given by:
output[b, y, x, c] = min_{dy, dx} value[b,
                                        strides[1] * y - rates[1] * dy,
                                        strides[2] * x - rates[2] * dx,
                                        c] -
                                  kernel[dy, dx, c]
Duality: The erosion of `value` by the `kernel` is equal to the negation of
the dilation of `-value` by the reflected `kernel`.
Args:
value: A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.
kernel: A `Tensor`. Must have the same type as `value`.
3-D with shape `[kernel_height, kernel_width, depth]`.
strides: A list of `ints` that has length `>= 4`.
1-D of length 4. The stride of the sliding window for each dimension of
the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
rates: A list of `ints` that has length `>= 4`.
1-D of length 4. The input stride for atrous morphological dilation.
Must be: `[1, rate_height, rate_width, 1]`.
padding: A `string` from: `"SAME", "VALID"`.
The type of padding algorithm to use.
name: A name for the operation (optional). If not specified "erosion2d"
is used.
Returns:
A `Tensor`. Has the same type as `value`.
4-D with shape `[batch, out_height, out_width, depth]`.
Raises:
ValueError: If the `value` depth does not match `kernel`'s shape, or if
padding is other than `'VALID'` or `'SAME'`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def erosion2d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.erosion2d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.erosion2d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.erosion2d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def erosion2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.erosion2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.erosion2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.erosion2d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension,
e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def exp(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.exp(*args, **kwargs)
It accepts the same arguments as `tensorflow.exp`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.exp(x1, *args, **kwargs)
is equivalent to
builder.exp(*args, **kwargs)(x1)
tensorflow.exp
Computes exponential of x element-wise. \\(y = e^x\\).
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def expand_dims(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.expand_dims(*args, **kwargs)
It accepts the same arguments as `tensorflow.expand_dims`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.expand_dims(x1, *args, **kwargs)
is equivalent to
builder.expand_dims(*args, **kwargs)(x1)
tensorflow.expand_dims
Inserts a dimension of 1 into a tensor's shape.
Given a tensor `input`, this operation inserts a dimension of 1 at the
dimension index `dim` of `input`'s shape. The dimension index `dim` starts at
zero; if you specify a negative number for `dim` it is counted backward from
the end.
This operation is useful if you want to add a batch dimension to a single
element. For example, if you have a single image of shape `[height, width,
channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`,
which will make the shape `[1, height, width, channels]`.
Other examples:
```prettyprint
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]

# 't2' is a tensor of shape [2, 3, 5]
shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
```
This operation requires that:
-1-input.dims() <= dim <= input.dims()
This operation is related to `squeeze()`, which removes dimensions of
size 1.
Args:
input: A `Tensor`.
dim: A `Tensor`. Must be one of the following types: `int32`, `int64`.
0-D (scalar). Specifies the dimension index at which to
expand the shape of `input`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
Contains the same data as `input`, but its shape has an additional
dimension of size 1 added.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
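A short sketch of the batch-dimension use case described above, plus a negative index:

```python
import tensorflow as tf

image = tf.zeros([28, 28, 3])             # one [height, width, channels] image
batch = tf.expand_dims(image, 0)          # shape [1, 28, 28, 3]

t = tf.constant([1, 2])
column = tf.expand_dims(t, -1)            # shape [2, 1]
```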
def extract_image_patches(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.extract_image_patches(*args, **kwargs)
It accepts the same arguments as `tensorflow.extract_image_patches`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.extract_image_patches(x1, *args, **kwargs)
is equivalent to
builder.extract_image_patches(*args, **kwargs)(x1)
tensorflow.extract_image_patches
Extract `patches` from `images` and put them in the "depth" output dimension.
Args:
images: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
ksizes: A list of `ints` that has length `>= 4`.
The size of the sliding window for each dimension of `images`.
strides: A list of `ints` that has length `>= 4`.
1-D of length 4. How far the centers of two consecutive patches are in
the images. Must be: `[1, stride_rows, stride_cols, 1]`.
rates: A list of `ints` that has length `>= 4`.
1-D of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the
input stride, specifying how far two consecutive patch samples are in the
input. Equivalent to extracting patches with
`patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by
subsampling them spatially by a factor of `rates`.
padding: A `string` from: `"SAME", "VALID"`.
The type of padding algorithm to use.
We specify the size-related attributes as:
ksizes = [1, ksize_rows, ksize_cols, 1]
strides = [1, strides_rows, strides_cols, 1]
rates = [1, rates_rows, rates_cols, 1]
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `images`.
4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows *
ksize_cols * depth]` containing image patches with size
`ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def fft(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fft(*args, **kwargs)
It accepts the same arguments as `tensorflow.fft`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.fft(x1, *args, **kwargs)
is equivalent to
builder.fft(*args, **kwargs)(x1)
tensorflow.fft
Compute the 1-dimensional discrete Fourier Transform over the inner-most
dimension of `input`.
Args:
input: A `Tensor` of type `complex64`. A complex64 tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64`.
A complex64 tensor of the same shape as `input`. The inner-most
dimension of `input` is replaced with its 1D Fourier Transform.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def fft2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fft2d(*args, **kwargs)
It accepts the same arguments as `tensorflow.fft2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.fft2d(x1, *args, **kwargs)
is equivalent to
builder.fft2d(*args, **kwargs)(x1)
tensorflow.fft2d
Compute the 2-dimensional discrete Fourier Transform over the inner-most
2 dimensions of `input`.
Args:
input: A `Tensor` of type `complex64`. A complex64 tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64`.
A complex64 tensor of the same shape as `input`. The inner-most 2
dimensions of `input` are replaced with their 2D Fourier Transform.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def fft3d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fft3d(*args, **kwargs)
It accepts the same arguments as `tensorflow.fft3d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.fft3d(x1, *args, **kwargs)
is equivalent to
builder.fft3d(*args, **kwargs)(x1)
tensorflow.fft3d
Compute the 3-dimensional discrete Fourier Transform over the inner-most 3
dimensions of `input`.
Args:
input: A `Tensor` of type `complex64`. A complex64 tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64`.
A complex64 tensor of the same shape as `input`. The inner-most 3
dimensions of `input` are replaced with their 3D Fourier Transform.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def fill(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fill(*args, **kwargs)
It accepts the same arguments as `tensorflow.fill`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.fill(x1, *args, **kwargs)
is equivalent to
builder.fill(*args, **kwargs)(x1)
tensorflow.fill
Creates a tensor filled with a scalar value.
This operation creates a tensor of shape `dims` and fills it with `value`.
For example:
```prettyprint
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]
```
Args:
dims: A `Tensor` of type `int32`.
1-D. Represents the shape of the output tensor.
value: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `value`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
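A small sketch; unlike `tf.constant`, the fill value may itself be a computed scalar tensor:

```python
import tensorflow as tf

nines = tf.fill([2, 3], 9)  # [[9, 9, 9], [9, 9, 9]]

# The value can come out of the graph rather than being a Python literal:
dynamic = tf.fill([2, 3], tf.reduce_max(tf.constant([1, 9])))
```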
def fixed_size_partitioner(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fixed_size_partitioner(*args, **kwargs)
It accepts the same arguments as `tensorflow.fixed_size_partitioner`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.fixed_size_partitioner(x1, *args, **kwargs)
is equivalent to
builder.fixed_size_partitioner(*args, **kwargs)(x1)
tensorflow.fixed_size_partitioner
Partitioner to specify a fixed number of shards along given axis.
Args:
num_shards: `int`, number of shards to partition variable.
axis: `int`, axis to partition on.
Returns:
A partition function usable as the `partitioner` argument to
`variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
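A sketch of the typical use with `variable_scope` / `get_variable`, assuming the TensorFlow 1.x-era variable API:

```python
import tensorflow as tf

# Split a [10, 20] variable into 5 shards of shape [2, 20] along axis 0.
partitioner = tf.fixed_size_partitioner(num_shards=5, axis=0)

with tf.variable_scope("embeddings", partitioner=partitioner):
    table = tf.get_variable("table", shape=[10, 20])
```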
def fixed_unigram_candidate_sampler(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fixed_unigram_candidate_sampler(*args, **kwargs)
It accepts the same arguments as `tf.nn.fixed_unigram_candidate_sampler`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.fixed_unigram_candidate_sampler(x1, *args, **kwargs)
is equivalent to
builder.fixed_unigram_candidate_sampler(*args, **kwargs)(x1)
tf.nn.fixed_unigram_candidate_sampler
Samples a set of classes using the provided (fixed) base distribution.
This operation randomly samples a tensor of sampled classes
(`sampled_candidates`) from the range of integers `[0, range_max)`.
The elements of `sampled_candidates` are drawn without replacement
(if `unique=True`) or with replacement (if `unique=False`) from
the base distribution.
The base distribution is read from a file or passed in as an in-memory array. There is also an option to skew the distribution by applying a distortion power to the weights.
In addition, this operation returns tensors `true_expected_count`
and `sampled_expected_count` representing the number of times each
of the target classes (`true_classes`) and the sampled
classes (`sampled_candidates`) is expected to occur in an average
tensor of sampled classes. These values correspond to `Q(y|x)`
defined in this document.
If `unique=True`, then these are post-rejection probabilities and we
compute them approximately.
Args:
true_classes: A `Tensor` of type `int64` and shape `[batch_size,
num_true]`. The target classes.
num_true: An `int`. The number of target classes per training example.
num_sampled: An `int`. The number of classes to randomly sample per batch.
unique: A `bool`. Determines whether all sampled classes in a batch are
unique.
range_max: An `int`. The number of possible classes.
vocab_file: Each valid line in this file (which should have a CSV-like
format) corresponds to a valid word ID. IDs are in sequential order,
starting from num_reserved_ids. The last entry in each line is expected
to be a value corresponding to the count or relative probability. Exactly
one of `vocab_file` and `unigrams` needs to be passed to this operation.
distortion: The distortion is used to skew the unigram probability
distribution. Each weight is first raised to the distortion's power
before adding to the internal unigram distribution. As a result,
`distortion = 1.0` gives regular unigram sampling (as defined by the vocab
file), and `distortion = 0.0` gives a uniform distribution.
num_reserved_ids: Optionally some reserved IDs can be added in the range
`[0, num_reserved_ids]` by the users. One use case is that a special
unknown word token is used as ID 0. These IDs will have a sampling
probability of 0.
num_shards: A sampler can be used to sample from a subset of the original
range in order to speed up the whole computation through parallelism. This
parameter (together with `shard`) indicates the number of partitions that
are being used in the overall computation.
shard: A sampler can be used to sample from a subset of the original range
in order to speed up the whole computation through parallelism. This
parameter (together with `num_shards`) indicates the particular partition
number of the operation, when partitioning is being used.
unigrams: A list of unigram counts or probabilities, one per ID in
sequential order. Exactly one of `vocab_file` and `unigrams` should be
passed to this operation.
seed: An `int`. An operation-specific seed. Default is 0.
name: A name for the operation (optional).
Returns:
sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.
The sampled classes.
true_expected_count: A tensor of type `float`. Same shape as
`true_classes`. The expected counts under the sampling distribution
of each of `true_classes`.
sampled_expected_count: A tensor of type `float`. Same shape as
`sampled_candidates`. The expected counts under the sampling distribution
of each of `sampled_candidates`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def fixed_unigram_candidate_sampler_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fixed_unigram_candidate_sampler_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.fixed_unigram_candidate_sampler_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fixed_unigram_candidate_sampler`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def fixed_unigram_candidate_sampler_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fixed_unigram_candidate_sampler_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.fixed_unigram_candidate_sampler_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fixed_unigram_candidate_sampler`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a defined value for the last dimension,
e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def flatten(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.flatten(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.flatten`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.flatten(x1, *args, **kwargs)
is equivalent to
builder.flatten(*args, **kwargs)(x1)
tf.contrib.layers.flatten
Flattens the input while maintaining the batch_size.
Assumes that the first dimension represents the batch.
Args: inputs: a tensor of size [batch_size, ...]. outputs_collections: collection to add the outputs. scope: Optional scope for name_scope.
Returns: a flattened tensor with shape [batch_size, k]. Raises: ValueError: if inputs.shape is wrong.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
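For illustration, a minimal sketch of the equivalence above (assuming a TensorFlow build of this era, with `builder` standing for any `TensorBuilder` instance already in scope):

```python
import tensorflow as tf

# A batch of 28x28 RGB images: [batch_size, height, width, channels].
images = tf.placeholder(tf.float32, [None, 28, 28, 3])

# Direct call: collapses every dimension except the batch dimension.
flat_direct = tf.contrib.layers.flatten(images)   # shape [batch_size, 2352]

# Builder form: flatten() returns a partial that expects the tensor.
flat_builder = builder.flatten()(images)
```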
def floor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.floor(*args, **kwargs)
It accepts the same arguments as `tensorflow.floor`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.floor(x1, *args, **kwargs)
is equivalent to
builder.floor(*args, **kwargs)(x1)
tensorflow.floor
Returns element-wise largest integer not greater than x.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
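A quick sketch of the partial-application pattern for `floor` (illustrative; `builder` is assumed to be a `TensorBuilder` instance in scope):

```python
import tensorflow as tf

x = tf.constant([0.8, 1.2, -0.5])

floor_direct = tf.floor(x)          # [0.0, 1.0, -1.0]
floor_partial = builder.floor()(x)  # same op, built via the returned partial
```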
def floordiv(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.floordiv(*args, **kwargs)
It accepts the same arguments as `tensorflow.floordiv`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.floordiv(x1, *args, **kwargs)
is equivalent to
builder.floordiv(*args, **kwargs)(x1)
tensorflow.floordiv
Divides `x / y` elementwise, rounding down for floating point.
The same as `tf.div(x, y)` for integers, but uses `tf.floor(tf.div(x, y))` for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by `x // y` floor division in Python 3 and in Python 2.7 with `from __future__ import division`.
Note that for efficiency, `floordiv` uses C semantics for negative numbers (unlike Python and Numpy).
`x` and `y` must have the same type, and the result will have the same type as well.
Args:
x: `Tensor` numerator of real numeric type.
y: `Tensor` denominator of real numeric type.
name: A name for the operation (optional).
Returns:
`x / y` rounded down (except possibly towards zero for negative integers).
Raises: TypeError: If the inputs are complex.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
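The C-semantics note above is easiest to see with negative operands; a small sketch (assuming a TensorFlow build of this era):

```python
import tensorflow as tf

with tf.Session() as sess:
    # Floats use a true floor: -7.0 / 2.0 = -3.5, floored to -4.0.
    print(sess.run(tf.floordiv(tf.constant(-7.0), tf.constant(2.0))))  # -4.0
    # Integers follow C semantics (truncation toward zero), so -7 // 2 -> -3.
    print(sess.run(tf.floordiv(tf.constant(-7), tf.constant(2))))      # -3
```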
def foldl(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.foldl(*args, **kwargs)
It accepts the same arguments as `tensorflow.foldl`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.foldl(x1, *args, **kwargs)
is equivalent to
builder.foldl(*args, **kwargs)(x1)
tensorflow.foldl
foldl on the list of tensors unpacked from `elems` on dimension 0.
This foldl operator repeatedly applies the callable `fn` to a sequence of elements from first to last. The elements are made of the tensors unpacked from `elems` on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If `initializer` is None, `elems` must contain at least one element, and its first element is used as the initializer.
Suppose that `elems` is unpacked into `values`, a list of tensors. The shape of the result tensor is `fn(initializer, values[0]).shape`.
Args:
fn: The callable to be performed.
elems: A tensor to be unpacked on dimension 0.
initializer: (optional) The initial value for the accumulator.
parallel_iterations: (optional) The number of iterations allowed to run in parallel.
back_prop: (optional) True enables support for back propagation.
swap_memory: (optional) True enables GPU-CPU memory swapping.
name: (optional) Name prefix for the returned tensors.
Returns:
A tensor resulting from applying `fn` consecutively to the list of tensors unpacked from `elems`, from first to last.
Raises:
TypeError: if `fn` is not callable.
Example:
```python
elems = [1, 2, 3, 4, 5, 6]
sum = foldl(lambda a, x: a + x, elems)
# sum == 21
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
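In builder form, the same fold reads as follows (a sketch; `builder` is assumed to be a `TensorBuilder` instance in scope):

```python
import tensorflow as tf

elems = tf.constant([1, 2, 3, 4, 5, 6])

# builder.foldl(fn) returns a partial that expects `elems` as its first argument.
total = builder.foldl(lambda a, x: a + x)(elems)

with tf.Session() as sess:
    print(sess.run(total))  # 21
```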
def foldr(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.foldr(*args, **kwargs)
It accepts the same arguments as `tensorflow.foldr`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.foldr(x1, *args, **kwargs)
is equivalent to
builder.foldr(*args, **kwargs)(x1)
tensorflow.foldr
foldr on the list of tensors unpacked from `elems` on dimension 0.
This foldr operator repeatedly applies the callable `fn` to a sequence of elements from last to first. The elements are made of the tensors unpacked from `elems`. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If `initializer` is None, `elems` must contain at least one element, and its first element is used as the initializer.
Suppose that `elems` is unpacked into `values`, a list of tensors. The shape of the result tensor is `fn(initializer, values[0]).shape`.
Args:
fn: The callable to be performed.
elems: A tensor that is unpacked into a sequence of tensors to apply `fn`.
initializer: (optional) The initial value for the accumulator.
parallel_iterations: (optional) The number of iterations allowed to run in parallel.
back_prop: (optional) True enables support for back propagation.
swap_memory: (optional) True enables GPU-CPU memory swapping.
name: (optional) Name prefix for the returned tensors.
Returns:
A tensor resulting from applying `fn` consecutively to the list of tensors unpacked from `elems`, from last to first.
Raises:
TypeError: if `fn` is not callable.
Example:
```python
elems = [1, 2, 3, 4, 5, 6]
sum = foldr(lambda a, x: a + x, elems)
# sum == 21
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fractional_avg_pool(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fractional_avg_pool(*args, **kwargs)
It accepts the same arguments as `tf.nn.fractional_avg_pool`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.fractional_avg_pool(x1, *args, **kwargs)
is equivalent to
builder.fractional_avg_pool(*args, **kwargs)(x1)
tf.nn.fractional_avg_pool
Performs fractional average pooling on the input.
Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.
Args:
value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`. 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: A list of `floats` that has length >= 4. Pooling ratio for each dimension of `value`; currently only the row and col dimensions are supported and the ratios should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions; 1.44 and 1.73 are the pooling ratios on the height and width dimensions respectively.
pseudo_random: An optional `bool`. Defaults to `False`. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise in a random fashion. See the paper [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for the difference between pseudorandom and random.
overlapping: An optional `bool`. Defaults to `False`. When set to True, the values at the boundary of adjacent pooling cells are used by both cells. For example, given
`index 0 1 2 3 4`
`value 20 5 16 3 7`
if the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [41/3, 26/3] for fractional avg pooling.
deterministic: An optional `bool`. Defaults to `False`. When set to True, a fixed pooling region will be used when iterating over a FractionalAvgPool node in the computation graph. Mainly used in unit tests to make FractionalAvgPool deterministic.
seed: An optional `int`. Defaults to 0. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: An optional `int`. Defaults to 0. A second seed to avoid seed collision.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).
output: A `Tensor`. Has the same type as `value`. Output tensor after fractional avg pooling.
row_pooling_sequence: A `Tensor` of type `int64`. Row pooling sequence, needed to calculate gradient.
col_pooling_sequence: A `Tensor` of type `int64`. Column pooling sequence, needed to calculate gradient.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
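A minimal call sketch (assuming a TensorFlow build of this era); the exact output shape depends on the randomly generated pooling sequence:

```python
import tensorflow as tf
import numpy as np

# One 4x4 single-channel "image": [batch, height, width, channels].
value = tf.constant(np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1))

# Pool height and width only; the batch and channels ratios must be 1.0.
output, rows, cols = tf.nn.fractional_avg_pool(
    value, pooling_ratio=[1.0, 1.44, 1.44, 1.0])

with tf.Session() as sess:
    out = sess.run(output)
    print(out.shape)  # roughly (1, 4/1.44, 4/1.44, 1), e.g. (1, 2, 2, 1)
```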
def fractional_avg_pool_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fractional_avg_pool_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.fractional_avg_pool_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fractional_avg_pool`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fractional_avg_pool_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fractional_avg_pool_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.fractional_avg_pool_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fractional_avg_pool`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor of at least rank 2 with a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fractional_max_pool(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fractional_max_pool(*args, **kwargs)
It accepts the same arguments as `tf.nn.fractional_max_pool`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.fractional_max_pool(x1, *args, **kwargs)
is equivalent to
builder.fractional_max_pool(*args, **kwargs)(x1)
tf.nn.fractional_max_pool
Performs fractional max pooling on the input.
Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.
The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.
First we define the following:
- input_row_length : the number of rows from the input set
- output_row_length : which will be smaller than the input
- alpha = input_row_length / output_row_length : our reduction ratio
- K = floor(alpha)
- row_pooling_sequence : this is the result list of pool boundary rows
Then, row_pooling_sequence should satisfy:
- a[0] = 0 : the first value of the sequence is 0
- a[end] = input_row_length : the last value of the sequence is the size
- K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
- length(row_pooling_sequence) = output_row_length+1
For more details on fractional max pooling, see this paper: [Benjamin Graham, Fractional Max-Pooling] (http://arxiv.org/abs/1412.6071)
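To make these constraints concrete, here is a small illustrative check in plain Python (the names are hypothetical, not part of the TensorFlow API):

```python
def is_valid_pooling_sequence(seq, input_row_length, output_row_length):
    """Check the four constraints listed above (illustrative only)."""
    alpha = input_row_length / float(output_row_length)
    K = int(alpha)  # floor(alpha)
    return (
        seq[0] == 0
        and seq[-1] == input_row_length
        and all(K <= b - a <= K + 1 for a, b in zip(seq, seq[1:]))
        and len(seq) == output_row_length + 1
    )

# Example: 5 input rows pooled down to 3 output rows (alpha = 5/3, K = 1).
print(is_valid_pooling_sequence([0, 2, 4, 5], 5, 3))  # True
```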
Args:
value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`. 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: A list of `floats` that has length >= 4. Pooling ratio for each dimension of `value`; currently only the row and col dimensions are supported and the ratios should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions; 1.44 and 1.73 are the pooling ratios on the height and width dimensions respectively.
pseudo_random: An optional `bool`. Defaults to `False`. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise in a random fashion. See the paper [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for the difference between pseudorandom and random.
overlapping: An optional `bool`. Defaults to `False`. When set to True, the values at the boundary of adjacent pooling cells are used by both cells. For example, given
`index 0 1 2 3 4`
`value 20 5 16 3 7`
if the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [20, 16] for fractional max pooling.
deterministic: An optional `bool`. Defaults to `False`. When set to True, a fixed pooling region will be used when iterating over a FractionalMaxPool node in the computation graph. Mainly used in unit tests to make FractionalMaxPool deterministic.
seed: An optional `int`. Defaults to 0. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: An optional `int`. Defaults to 0. A second seed to avoid seed collision.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).
output: A `Tensor`. Has the same type as `value`. Output tensor after fractional max pooling.
row_pooling_sequence: A `Tensor` of type `int64`. Row pooling sequence, needed to calculate gradient.
col_pooling_sequence: A `Tensor` of type `int64`. Column pooling sequence, needed to calculate gradient.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fractional_max_pool_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fractional_max_pool_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.fractional_max_pool_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fractional_max_pool`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fractional_max_pool_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fractional_max_pool_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.fractional_max_pool_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fractional_max_pool`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor of at least rank 2 with a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fused_resize_and_pad_conv2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fused_resize_and_pad_conv2d(*args, **kwargs)
It accepts the same arguments as `tf.nn.fused_resize_and_pad_conv2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.fused_resize_and_pad_conv2d(x1, *args, **kwargs)
is equivalent to
builder.fused_resize_and_pad_conv2d(*args, **kwargs)(x1)
tf.nn.fused_resize_and_pad_conv2d
Performs a resize and padding as a preprocess during a convolution.
It's often possible to do spatial transformations more efficiently as part of the packing stage of a convolution, so this op allows for an optimized implementation where these stages are fused together. This prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and defaults to 'NHWC' order. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.
Args:
input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. 4-D with shape `[batch, in_height, in_width, in_channels]`.
size: A `Tensor` of type `int32`. A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
paddings: A `Tensor` of type `int32`. A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.
filter: A `Tensor`. Must have the same type as `input`. 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
mode: A `string` from: `"REFLECT", "SYMMETRIC"`.
strides: A list of `ints`. 1-D of length 4. The stride of the sliding window for each dimension of `input`. Must be in the same order as the dimension specified with format.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
resize_align_corners: An optional `bool`. Defaults to `False`. If true, rescale the input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of the images and the resized images. If false, rescale by new_height / height. Treat the width dimension similarly.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
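A minimal call sketch under the argument list above (assuming a TensorFlow build of this era; the shapes and values here are illustrative):

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [1, 16, 16, 3])  # [batch, h, w, channels]
kernel = tf.random_normal([3, 3, 3, 8])              # [fh, fw, in_ch, out_ch]

out = tf.nn.fused_resize_and_pad_conv2d(
    images,
    size=[32, 32],                               # resize to 32x32 first
    paddings=[[0, 0], [1, 1], [1, 1], [0, 0]],   # then mirror-pad height/width
    filter=kernel,
    mode="REFLECT",
    strides=[1, 1, 1, 1],
    padding="SAME",
)
```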
def fused_resize_and_pad_conv2d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fused_resize_and_pad_conv2d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.fused_resize_and_pad_conv2d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fused_resize_and_pad_conv2d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def fused_resize_and_pad_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.fused_resize_and_pad_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.fused_resize_and_pad_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.fused_resize_and_pad_conv2d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor of at least rank 2 with a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def gather(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.gather(*args, **kwargs)
It accepts the same arguments as `tensorflow.gather`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.gather(x1, *args, **kwargs)
is equivalent to
builder.gather(*args, **kwargs)(x1)
tensorflow.gather
Gather slices from `params` according to `indices`.
`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape `indices.shape + params.shape[1:]` where:
# Scalar indices
output[:, ..., :] = params[indices, :, ... :]
# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]
# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
If `indices` is a permutation and `len(indices) == params.shape[0]` then this operation will permute `params` accordingly.
Args:
params: A `Tensor`.
indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
validate_indices: An optional `bool`. Defaults to `True`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `params`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
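A short sketch of the shape rule above, `indices.shape + params.shape[1:]`:

```python
import tensorflow as tf

params = tf.constant([[1, 2], [3, 4], [5, 6]])
indices = tf.constant([2, 0])

# Picks rows 2 and 0; output shape is [2] + [2] = [2, 2].
gathered = tf.gather(params, indices)

with tf.Session() as sess:
    print(sess.run(gathered))  # [[5 6]
                               #  [1 2]]
```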
def gather_nd(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.gather_nd(*args, **kwargs)
It accepts the same arguments as `tensorflow.gather_nd`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.gather_nd(x1, *args, **kwargs)
is equivalent to
builder.gather_nd(*args, **kwargs)(x1)
tensorflow.gather_nd
Gather values or slices from `params` according to `indices`.
`params` is a Tensor of rank `R` and `indices` is a Tensor of rank `M`.
`indices` must be an integer tensor, containing indices into `params`. It must be of shape `[d_0, ..., d_N, R]` where `0 < R <= M`.
The innermost dimension of `indices` (with length `R`) corresponds to indices into elements (if `R = M`) or slices (if `R < M`) along the `N`th dimension of `params`.
Produces an output tensor with shape `[d_0, ..., d_{n-1}, params.shape[R], ..., params.shape[M-1]]`.
Some examples below.
Simple indexing into a matrix:
indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']
Slice indexing into a matrix:
indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]
Indexing into a 3-tensor:
indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]

indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]

indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']
Batched indexing into a matrix:
indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]
Batched slice indexing into a matrix:
indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]
Batched indexing into a 3-tensor:
indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]], [[['a0', 'b0'], ['c0', 'd0']]]]

indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']], [['a0', 'b0'], ['c1', 'd1']]]

indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]
Args:
params: A `Tensor`. `M-D`. The tensor from which to gather values.
indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. `(N+1)-D`. Index tensor having shape `[d_0, ..., d_N, R]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `params`. `(N+M-R)-D`. Values from `params` gathered from indices given by `indices`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_collection(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_collection(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_collection`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_collection(x1, *args, **kwargs)
is equivalent to
builder.get_collection(*args, **kwargs)(x1)
tensorflow.get_collection
Wrapper for `Graph.get_collection()` using the default graph.
See `Graph.get_collection()` for more details.
Args:
key: The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
scope: (Optional.) If supplied, the resulting list is filtered to include only items whose `name` attribute matches using `re.match`. Items without a `name` attribute are never returned if a scope is supplied, and the choice of `re.match` means that a `scope` without special tokens filters by prefix.
Returns:
The list of values in the collection with the given `name`, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
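A small sketch of the `scope` prefix filtering described above:

```python
import tensorflow as tf

with tf.variable_scope("layer1"):
    w = tf.get_variable("w", shape=[4, 4])

# All trainable variables in the default graph...
all_trainable = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)

# ...and only those whose names match the "layer1" prefix.
layer1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="layer1")
```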
def get_collection_ref(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_collection_ref(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_collection_ref`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_collection_ref(x1, *args, **kwargs)
is equivalent to
builder.get_collection_ref(*args, **kwargs)(x1)
tensorflow.get_collection_ref
Wrapper for `Graph.get_collection_ref()` using the default graph.
See `Graph.get_collection_ref()` for more details.
Args:
key: The key for the collection. For example, the `GraphKeys` class contains many standard names for collections.
Returns:
The list of values in the collection with the given `name`, or an empty list if no value has been added to that collection. Note that this returns the collection list itself, which can be modified in place to change the collection.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_default_graph(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_default_graph(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_default_graph`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_default_graph(x1, *args, **kwargs)
is equivalent to
builder.get_default_graph(*args, **kwargs)(x1)
tensorflow.get_default_graph
Returns the default graph for the current thread.
The returned graph will be the innermost graph on which a `Graph.as_default()` context has been entered, or a global default graph if none has been explicitly created.
NOTE: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a `with g.as_default():` in that thread's function.
Returns:
The default `Graph` being used in the current thread.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_default_session(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_default_session(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_default_session`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_default_session(x1, *args, **kwargs)
is equivalent to
builder.get_default_session(*args, **kwargs)(x1)
tensorflow.get_default_session
Returns the default session for the current thread.
The returned `Session` will be the innermost session on which a `Session` or `Session.as_default()` context has been entered.
NOTE: The default session is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a `with sess.as_default():` in that thread's function.
Returns:
The default `Session` being used in the current thread.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_seed(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_seed(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_seed`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_seed(x1, *args, **kwargs)
is equivalent to
builder.get_seed(*args, **kwargs)(x1)
tensorflow.get_seed
Returns the local seeds an operation should use given an op-specific seed.
Given an operation-specific seed, `op_seed`, this helper function returns two seeds derived from graph-level and op-level seeds. Many random operations internally use the two seeds to allow the user to change the seed globally for a graph, or for only specific operations.
For details on how the graph-level seed interacts with op seeds, see `set_random_seed`.
Args:
op_seed: integer.
Returns:
A tuple of two integers that should be used for the local seed of this operation.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_session_handle(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_session_handle(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_session_handle`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_session_handle(x1, *args, **kwargs)
is equivalent to
builder.get_session_handle(*args, **kwargs)(x1)
tensorflow.get_session_handle
Return the handle of `data`.
This is EXPERIMENTAL and subject to change.
Keep `data` "in-place" in the runtime and create a handle that can be used to retrieve `data` in a subsequent run().
Combined with `get_session_tensor`, we can keep a tensor produced in one run call in place, and use it as the input in a future run call.
Args:
data: A tensor to be stored in the session.
name: Optional name prefix for the return tensor.
Returns:
A scalar string tensor representing a unique handle for `data`.
Raises:
TypeError: if `data` is not a Tensor.
Example:
```python
c = tf.mul(a, b)
h = tf.get_session_handle(c)
h = sess.run(h)

p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.mul(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_session_tensor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_session_tensor(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_session_tensor`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_session_tensor(x1, *args, **kwargs)
is equivalent to
builder.get_session_tensor(*args, **kwargs)(x1)
tensorflow.get_session_tensor
Get the tensor of type `dtype` by feeding a tensor handle.
This is EXPERIMENTAL and subject to change.
Get the value of the tensor from a tensor handle. The tensor is produced in a previous run() and stored in the state of the session.
Args:
handle: The string representation of a persistent tensor handle.
dtype: The type of the output tensor.
name: Optional name prefix for the return tensor.
Returns:
A pair of tensors. The first is a placeholder for feeding a tensor handle and the second is the tensor in the session state keyed by the tensor handle.
Example:
```python
c = tf.mul(a, b)
h = tf.get_session_handle(c)
h = sess.run(h)

p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.mul(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_variable(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_variable(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_variable`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.get_variable(x1, *args, **kwargs)
is equivalent to
builder.get_variable(*args, **kwargs)(x1)
tensorflow.get_variable
Gets an existing variable with these parameters or create a new one.
This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example:
```python
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])  # v.name == "foo/v:0"
    w = tf.get_variable("w", [1])  # w.name == "foo/w:0"
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v")  # The same as v above.
```
If initializer is `None` (the default), the default initializer passed in the variable scope will be used. If that one is `None` too, a `uniform_unit_scaling_initializer` will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape.
Similarly, if the regularizer is `None` (the default), the default regularizer passed in the variable scope will be used (if that is `None` too, then by default no regularization is performed).
If a partitioner is provided, first a sharded `Variable` is created via `_get_partitioned_variable`, and the return value is a `Tensor` composed of the shards concatenated along the partition axis.
Some useful partitioners are available. See, e.g., `variable_axis_size_partitioner` and `min_max_variable_partitioner`.
Args:
name: The name of the new or existing variable.
shape: Shape of the new or existing variable.
dtype: Type of the new or existing variable (defaults to `DT_FLOAT`).
initializer: Initializer for the variable if one is created.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If `True`, also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
collections: List of graph collections keys to add the Variable to. Defaults to `[GraphKeys.VARIABLES]` (see tf.Variable).
caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not `None`, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through `Switch` and other conditional statements.
partitioner: Optional callable that accepts a fully defined `TensorShape` and `dtype` of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned).
validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known.
custom_getter: Callable that takes as a first argument the true getter, and allows overwriting the internal get_variable method. The signature of `custom_getter` should match that of this method, but the most future-proof version will allow for changes: `def custom_getter(getter, *args, **kwargs)`. Direct access to all `get_variable` parameters is also allowed: `def custom_getter(getter, name, *args, **kwargs)`. A simple identity custom getter that simply creates variables with modified names is:
```python
def custom_getter(getter, name, *args, **kwargs):
    return getter(name + '_suffix', *args, **kwargs)
```
Returns: The created or existing variable.
Raises:
ValueError: when creating a new variable and shape is not declared, when violating reuse during variable creation, or when `initializer` dtype and `dtype` don't match. Reuse is set inside `variable_scope`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def get_variable_scope(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.get_variable_scope(*args, **kwargs)
It accepts the same arguments as `tensorflow.get_variable_scope`.
However, a partial with the arguments is returned which expects any argument `x` and completely ignores it, such that
tensorflow.get_variable_scope(*args, **kwargs)
is equivalent to
builder.get_variable_scope(*args, **kwargs)(x)
tensorflow.get_variable_scope
Returns the current variable scope.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then0(fn, *args, **kwargs)
def global_norm(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.global_norm(*args, **kwargs)
It accepts the same arguments as `tensorflow.global_norm`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.global_norm(x1, *args, **kwargs)
is equivalent to
builder.global_norm(*args, **kwargs)(x1)
tensorflow.global_norm
Computes the global norm of multiple tensors.
Given a tuple or list of tensors `t_list`, this operation returns the global norm of the elements in all tensors in `t_list`. The global norm is computed as:
`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
Any entries in `t_list` that are of type None are ignored.
Args:
t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
name: A name for the operation (optional).
Returns:
A 0-D (scalar) `Tensor` of type `float`.
Raises:
TypeError: If `t_list` is not a sequence.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
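A worked instance of the formula above:

```python
import tensorflow as tf

t_list = [tf.constant([3.0, 4.0]), tf.constant([12.0])]

# sqrt(3^2 + 4^2 + 12^2) = sqrt(169) = 13
norm = tf.global_norm(t_list)

with tf.Session() as sess:
    print(sess.run(norm))  # 13.0
```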
def gradients(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.gradients(*args, **kwargs)
It accepts the same arguments as `tensorflow.gradients`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.gradients(x1, *args, **kwargs)
is equivalent to
builder.gradients(*args, **kwargs)(x1)
tensorflow.gradients
Constructs symbolic partial derivatives of sum of `ys` w.r.t. x in `xs`.
`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys` is a list of `Tensor`, holding the gradients received by the `ys`. The list must be the same length as `ys`.
`gradients()` adds ops to the graph to output the partial derivatives of `ys` with respect to `xs`. It returns a list of `Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)` for y in `ys`.
`grad_ys` is a list of tensors of the same length as `ys` that holds the initial gradients for each y in `ys`. When `grad_ys` is None, we fill in a tensor of '1's of the shape of y for each y in `ys`. A user can provide their own initial `grad_ys` to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y).
Args:
ys: A `Tensor` or list of tensors to be differentiated.
xs: A `Tensor` or list of tensors to be used for differentiation.
grad_ys: Optional. A `Tensor` or list of tensors the same size as `ys` and holding the gradients computed for each y in `ys`.
name: Optional name to use for grouping all the gradient ops together. Defaults to 'gradients'.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
gate_gradients: If True, add a tuple around the gradients returned for an operation. This avoids some race conditions.
aggregation_method: Specifies the method used to combine gradient terms. Accepted values are constants defined in the class `AggregationMethod`.
Returns:
A list of `sum(dy/dx)` for each x in `xs`.
Raises:
LookupError: if one of the operations between `x` and `y` does not have a registered gradient function.
ValueError: if the arguments are invalid.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
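A minimal sketch of the symbolic-derivative behaviour described above:

```python
import tensorflow as tf

x = tf.constant(3.0)
y = x * x + 2.0 * x  # dy/dx = 2x + 2

(grad,) = tf.gradients(y, [x])  # returns a list, one entry per x in xs

with tf.Session() as sess:
    print(sess.run(grad))  # 8.0
```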
def greater(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.greater(*args, **kwargs)
It accepts the same arguments as `tensorflow.greater`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.greater(x1, *args, **kwargs)
is equivalent to
builder.greater(*args, **kwargs)(x1)
tensorflow.greater
Returns the truth value of (x > y) element-wise.
NOTE: `Greater` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
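For example, fixing `y` up front yields a reusable element-wise comparison (the `builder` construction below is an assumption):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

x = tf.constant([1, 5, 3])
threshold = tf.constant(2)

# builder.greater(y) is a partial that expects x as its 1st argument:
mask = builder.greater(threshold)(x)  # equivalent to tf.greater(x, threshold)
# mask evaluates to [False, True, True]
```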
def greater_equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.greater_equal(*args, **kwargs)
It accepts the same arguments as tensorflow.greater_equal.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.greater_equal(x1, *args, **kwargs)
is equivalent to
builder.greater_equal(*args, **kwargs)(x1)
tensorflow.greater_equal
Returns the truth value of (x >= y) element-wise.
NOTE: `GreaterEqual` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `float32`, `float64`,
`int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def group(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.group(*args, **kwargs)
It accepts the same arguments as tensorflow.group.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.group(x1, *args, **kwargs)
is equivalent to
builder.group(*args, **kwargs)(x1)
tensorflow.group
Create an op that groups multiple operations.
When this op finishes, all ops in `inputs` have finished. This op has no
output.
See also `tuple` and `with_dependencies`.
Args:
*inputs: Zero or more tensors to group.
**kwargs: Optional parameters to pass when constructing the NodeDef.
name: A name for this operation (optional).
Returns: An Operation that executes all its inputs.
Raises: ValueError: If an unknown keyword argument is provided.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
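A sketch of grouping two ops through the partial (the `builder` construction and the toy variables are assumptions):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

a = tf.Variable(0).assign_add(1)
b = tf.Variable(0).assign_add(1)

# builder.group(b) is a partial that expects the 1st op to group:
step = builder.group(b)(a)  # equivalent to tf.group(a, b)
# running `step` (after variable initialization) runs both assign ops;
# the grouping op itself has no output value
```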
def histogram_fixed_width(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.histogram_fixed_width(*args, **kwargs)
It accepts the same arguments as tensorflow.histogram_fixed_width.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.histogram_fixed_width(x1, *args, **kwargs)
is equivalent to
builder.histogram_fixed_width(*args, **kwargs)(x1)
tensorflow.histogram_fixed_width
Return histogram of values.
Given the tensor `values`, this operation returns a rank 1 histogram counting
the number of entries in `values` that fell into every bin. The bins are
equal width and determined by the arguments `value_range` and `nbins`.
Args:
values: Numeric `Tensor`.
value_range: Shape [2] `Tensor`. new_values <= value_range[0] will be
mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
Must be same dtype as new_values.
nbins: Scalar `int32` `Tensor`. Number of histogram bins.
dtype: dtype for returned histogram.
name: A name for this operation (defaults to 'histogram_fixed_width').
Returns:
A 1-D `Tensor` holding histogram of values.
Examples:
```python
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]

with tf.Session() as sess:
    hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
    sess.run(tf.initialize_all_variables())
    sess.run(hist)  # => [2, 1, 1, 0, 2]
```
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def histogram_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.histogram_summary(*args, **kwargs)
It accepts the same arguments as tensorflow.histogram_summary.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned, which expects the 2nd argument, such that
tensorflow.histogram_summary(x1, x2, *args, **kwargs)
is equivalent to
builder.histogram_summary(x1, *args, **kwargs)(x2)
tensorflow.histogram_summary
Outputs a `Summary` protocol buffer with a histogram.
The generated `Summary` has one summary value containing a histogram for
`values`.
This op reports an `InvalidArgument` error if any value is not finite.
Args:
tag: A `string` `Tensor`. 0-D. Tag to use for the summary value.
values: A real numeric `Tensor`. Any shape. Values to use to
build the histogram.
collections: Optional list of graph collections keys. The new summary op is
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
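Because the 2nd argument is the one omitted here, the tag can be fixed up front and the resulting partial applied to the values tensor. A sketch under the same assumptions as the earlier examples:
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

activations = tf.random_normal([32, 10])

# The tag is given now; the partial expects the values tensor (2nd argument):
summary = builder.histogram_summary("activations")(activations)
# equivalent to tf.histogram_summary("activations", activations)
```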
def identity(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.identity(*args, **kwargs)
It accepts the same arguments as tensorflow.identity.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.identity(x1, *args, **kwargs)
is equivalent to
builder.identity(*args, **kwargs)(x1)
tensorflow.identity
Return a tensor with the same shape and contents as the input tensor or value.
Args:
input: A `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ifft(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ifft(*args, **kwargs)
It accepts the same arguments as tensorflow.ifft.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.ifft(x1, *args, **kwargs)
is equivalent to
builder.ifft(*args, **kwargs)(x1)
tensorflow.ifft
Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most
dimension of `input`.
Args:
input: A `Tensor` of type `complex64`. A complex64 tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64`.
A complex64 tensor of the same shape as `input`. The inner-most
dimension of `input` is replaced with its inverse 1D Fourier Transform.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ifft2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ifft2d(*args, **kwargs)
It accepts the same arguments as tensorflow.ifft2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.ifft2d(x1, *args, **kwargs)
is equivalent to
builder.ifft2d(*args, **kwargs)(x1)
tensorflow.ifft2d
Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most
2 dimensions of `input`.
Args:
input: A `Tensor` of type `complex64`. A complex64 tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64`.
A complex64 tensor of the same shape as `input`. The inner-most 2
dimensions of `input` are replaced with their inverse 2D Fourier Transform.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def ifft3d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ifft3d(*args, **kwargs)
It accepts the same arguments as tensorflow.ifft3d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.ifft3d(x1, *args, **kwargs)
is equivalent to
builder.ifft3d(*args, **kwargs)(x1)
tensorflow.ifft3d
Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most
3 dimensions of `input`.
Args:
input: A `Tensor` of type `complex64`. A complex64 tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `complex64`.
A complex64 tensor of the same shape as `input`. The inner-most 3
dimensions of `input` are replaced with their inverse 3D Fourier Transform.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def igamma(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.igamma(*args, **kwargs)
It accepts the same arguments as tensorflow.igamma.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.igamma(x1, *args, **kwargs)
is equivalent to
builder.igamma(*args, **kwargs)(x1)
tensorflow.igamma
Compute the lower regularized incomplete Gamma function `P(a, x)`.
The lower regularized incomplete Gamma function is defined as:
P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)
where
gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt
is the lower incomplete Gamma function.
Note, above `Q(a, x)` (`Igammac`) is the upper regularized incomplete
Gamma function.
Args:
a: A `Tensor`. Must be one of the following types: `float32`, `float64`.
x: A `Tensor`. Must have the same type as `a`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `a`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def igammac(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.igammac(*args, **kwargs)
It accepts the same arguments as tensorflow.igammac.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.igammac(x1, *args, **kwargs)
is equivalent to
builder.igammac(*args, **kwargs)(x1)
tensorflow.igammac
Compute the upper regularized incomplete Gamma function `Q(a, x)`.
The upper regularized incomplete Gamma function is defined as:
Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)
where
Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt
is the upper incomplete Gamma function.
Note, above `P(a, x)` (`Igamma`) is the lower regularized incomplete
Gamma function.
Args:
a: A `Tensor`. Must be one of the following types: `float32`, `float64`.
x: A `Tensor`. Must have the same type as `a`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `a`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def imag(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.imag(*args, **kwargs)
It accepts the same arguments as tensorflow.imag.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.imag(x1, *args, **kwargs)
is equivalent to
builder.imag(*args, **kwargs)(x1)
tensorflow.imag
Returns the imaginary part of a complex number.
Given a tensor `input` of complex numbers, this operation returns a tensor of
type `float32` or `float64` that is the imaginary part of each element in
`input`. All elements in `input` must be complex numbers of the form (a +
bj), where a is the real part and b is the imaginary part returned by
this operation.
For example:
```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
```
Args:
input: A `Tensor`. Must be one of the following types: `complex64`,
`complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `float32` or `float64`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def image_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.image_summary(*args, **kwargs)
It accepts the same arguments as tensorflow.image_summary.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned, which expects the 2nd argument, such that
tensorflow.image_summary(x1, x2, *args, **kwargs)
is equivalent to
builder.image_summary(x1, *args, **kwargs)(x2)
tensorflow.image_summary
Outputs a `Summary` protocol buffer with images.
The summary has up to `max_images` summary values containing images. The
images are built from `tensor` which must be 4-D with shape
`[batch_size, height, width, channels]` and where `channels` can be:
- 1: `tensor` is interpreted as Grayscale.
- 3: `tensor` is interpreted as RGB.
- 4: `tensor` is interpreted as RGBA.
The images have the same number of channels as the input tensor. For float
input, the values are normalized one image at a time to fit in the range
`[0, 255]`. `uint8` values are unchanged. The op uses two different
normalization algorithms:
- If the input values are all positive, they are rescaled so the largest one is 255.
- If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.
The `tag` argument is a scalar `Tensor` of type `string`. It is used to
build the `tag` of the summary values:
- If `max_images` is 1, the summary value tag is 'tag/image'.
- If `max_images` is greater than 1, the summary value tags are generated sequentially as 'tag/image/0', 'tag/image/1', etc.
Args:
tag: A scalar `Tensor` of type `string`. Used to build the `tag`
of the summary values.
tensor: A 4-D `uint8` or `float32` `Tensor` of shape
`[batch_size, height, width, channels]` where `channels` is 1, 3, or 4.
max_images: Max number of batch elements to generate images for.
collections: Optional list of ops.GraphKeys. The collections to add the
summary to. Defaults to [ops.GraphKeys.SUMMARIES]
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
def import_graph_def(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.import_graph_def(*args, **kwargs)
It accepts the same arguments as tensorflow.import_graph_def.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.import_graph_def(x1, *args, **kwargs)
is equivalent to
builder.import_graph_def(*args, **kwargs)(x1)
tensorflow.import_graph_def
Imports the TensorFlow graph in `graph_def` into the Python `Graph`.
This function provides a way to import a serialized TensorFlow `GraphDef`
protocol buffer, and extract individual objects in the `GraphDef` as
`Tensor` and `Operation` objects. See `Graph.as_graph_def()` for a way to
create a `GraphDef` proto.
Args:
graph_def: A `GraphDef` proto containing operations to be imported into
the default graph.
input_map: A dictionary mapping input names (as strings) in `graph_def`
to `Tensor` objects. The values of the named input tensors in the
imported graph will be re-mapped to the respective `Tensor` values.
return_elements: A list of strings containing operation names in
`graph_def` that will be returned as `Operation` objects; and/or
tensor names in `graph_def` that will be returned as `Tensor` objects.
name: (Optional.) A prefix that will be prepended to the names in
`graph_def`. Defaults to `"import"`.
op_dict: (Optional.) A dictionary mapping op type names to `OpDef` protos.
Must contain an `OpDef` proto for each op type named in `graph_def`.
If omitted, uses the `OpDef` protos registered in the global registry.
producer_op_list: (Optional.) An `OpList` proto with the (possibly stripped)
list of `OpDef`s used by the producer of the graph. If provided, attrs
for ops in `graph_def` that are not in `op_dict` that have their default
value according to `producer_op_list` will be removed. This will allow
some more `GraphDef`s produced by later binaries to be accepted by
earlier binaries.
Returns:
A list of `Operation` and/or `Tensor` objects from the imported graph,
corresponding to the names in `return_elements`.
Raises:
TypeError: If `graph_def` is not a `GraphDef` proto,
`input_map` is not a dictionary mapping strings to `Tensor` objects,
or `return_elements` is not a list of strings.
ValueError: If `input_map`, or `return_elements` contains names that
do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.
it refers to an unknown tensor).
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
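A sketch of importing a serialized graph through the partial (the tiny graph and the `builder` construction are assumptions):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

# Build and serialize a tiny graph.
g = tf.Graph()
with g.as_default():
    tf.constant(42, name="answer")
graph_def = g.as_graph_def()

# builder.import_graph_def(...) is a partial that expects graph_def:
importer = builder.import_graph_def(return_elements=["answer:0"])
answer, = importer(graph_def)
# equivalent to tf.import_graph_def(graph_def, return_elements=["answer:0"])
```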
def in_top_k(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.in_top_k(*args, **kwargs)
It accepts the same arguments as tf.nn.in_top_k.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.nn.in_top_k(x1, *args, **kwargs)
is equivalent to
builder.in_top_k(*args, **kwargs)(x1)
tf.nn.in_top_k
Says whether the targets are in the top `K` predictions.
This outputs a `batch_size` bool array; an entry `out[i]` is `true` if the
prediction for the target class is among the top `k` predictions among
all predictions for example `i`. Note that the behavior of `InTopK` differs
from the `TopK` op in its handling of ties; if multiple classes have the
same prediction value and straddle the top-`k` boundary, all of those
classes are considered to be in the top `k`.
More formally, let
\(predictions_i\) be the predictions for all classes for example `i`,
\(targets_i\) be the target class for example `i`,
\(out_i\) be the output for example `i`,
$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
Args:
predictions: A `Tensor` of type `float32`.
A `batch_size` x `classes` tensor.
targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.
A `batch_size` vector of class ids.
k: An `int`. Number of top elements to look at for computing precision.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
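A sketch with a 2-example batch (the `builder` construction and the toy values are assumptions):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

predictions = tf.constant([[0.1, 0.8, 0.1],
                           [0.6, 0.3, 0.1]])
targets = tf.constant([1, 2])

# builder.in_top_k(targets, k) is a partial that expects predictions:
correct = builder.in_top_k(targets, 1)(predictions)
# equivalent to tf.nn.in_top_k(predictions, targets, 1) => [True, False]
```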
def in_top_k_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.in_top_k_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.in_top_k_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.in_top_k`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of
the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, then atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def in_top_k_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.in_top_k_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.in_top_k_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.in_top_k`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension,
i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def inception_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.inception_layer(*args, **kwargs)
It accepts the same arguments as tb.inception_layer.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tb.inception_layer(x1, *args, **kwargs)
is equivalent to
builder.inception_layer(*args, **kwargs)(x1)
tb.inception_layer
None
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def initialize_all_tables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.initialize_all_tables(*args, **kwargs)
It accepts the same arguments as tensorflow.initialize_all_tables.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.initialize_all_tables(x1, *args, **kwargs)
is equivalent to
builder.initialize_all_tables(*args, **kwargs)(x1)
tensorflow.initialize_all_tables
Returns an Op that initializes all tables of the default graph.
Args: name: Optional name for the initialization op.
Returns: An Op that initializes all tables. Note that if there are no tables the returned Op is a NoOp.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def initialize_all_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.initialize_all_variables(*args, **kwargs)
It accepts the same arguments as tensorflow.initialize_all_variables.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.initialize_all_variables(x1, *args, **kwargs)
is equivalent to
builder.initialize_all_variables(*args, **kwargs)(x1)
tensorflow.initialize_all_variables
Returns an Op that initializes all variables.
This is just a shortcut for `initialize_variables(all_variables())`.
Returns: An Op that initializes all variables in the graph.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
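Since `initialize_all_variables` takes no tensor argument, the builder wrapper adds little; a sketch of the plain TensorFlow 0.x usage of the wrapped function:
```python
import tensorflow as tf

# Standard usage: create variables, then run the returned init Op.
w = tf.Variable(tf.zeros([3]))
init_op = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init_op)  # all variables, including w, are now initialized
```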
def initialize_local_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.initialize_local_variables(*args, **kwargs)
It accepts the same arguments as tensorflow.initialize_local_variables.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.initialize_local_variables(x1, *args, **kwargs)
is equivalent to
builder.initialize_local_variables(*args, **kwargs)(x1)
tensorflow.initialize_local_variables
Returns an Op that initializes all local variables.
This is just a shortcut for `initialize_variables(local_variables())`.
Returns: An Op that initializes all local variables in the graph.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def initialize_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.initialize_variables(*args, **kwargs)
It accepts the same arguments as tensorflow.initialize_variables.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.initialize_variables(x1, *args, **kwargs)
is equivalent to
builder.initialize_variables(*args, **kwargs)(x1)
tensorflow.initialize_variables
Returns an Op that initializes a list of variables.
After you launch the graph in a session, you can run the returned Op to
initialize all the variables in `var_list`. This Op runs all the
initializers of the variables in `var_list` in parallel.
Calling `initialize_variables()` is equivalent to passing the list of
initializers to `Group()`.
If `var_list` is empty, however, the function still returns an Op that can
be run. That Op just has no effect.
Args:
var_list: List of `Variable` objects to initialize.
name: Optional name for the returned operation.
Returns: An Op that runs the initializers of all the specified variables.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def inv(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.inv(*args, **kwargs)
It accepts the same arguments as tensorflow.inv.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.inv(x1, *args, **kwargs)
is equivalent to
builder.inv(*args, **kwargs)(x1)
tensorflow.inv
Computes the reciprocal of x element-wise.
I.e., \(y = 1 / x\).
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`,
`float64`, `int32`, `int64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def invert_permutation(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.invert_permutation(*args, **kwargs)
It accepts the same arguments as tensorflow.invert_permutation.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.invert_permutation(x1, *args, **kwargs)
is equivalent to
builder.invert_permutation(*args, **kwargs)(x1)
tensorflow.invert_permutation
Computes the inverse permutation of a tensor.
This operation computes the inverse of an index permutation. It takes a 1-D
integer tensor `x`, which represents the indices of a zero-based array, and
swaps each value with its index position. In other words, for an output tensor
`y` and an input tensor `x`, this operation computes the following:
y[x[i]] = i for i in [0, 1, ..., len(x) - 1]
The values must include 0. There can be no duplicate values or negative values.
For example:
```prettyprint
# tensor x is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]
```
Args:
x: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`. 1-D.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def is_finite(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_finite(*args, **kwargs)
It accepts the same arguments as tensorflow.is_finite.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_finite(x1, *args, **kwargs)
is equivalent to
builder.is_finite(*args, **kwargs)(x1)
tensorflow.is_finite
Returns which elements of x are finite.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def is_inf(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_inf(*args, **kwargs)
It accepts the same arguments as tensorflow.is_inf.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_inf(x1, *args, **kwargs)
is equivalent to
builder.is_inf(*args, **kwargs)(x1)
tensorflow.is_inf
Returns which elements of x are Inf.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def is_nan(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_nan(*args, **kwargs)
It accepts the same arguments as tensorflow.is_nan.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_nan(x1, *args, **kwargs)
is equivalent to
builder.is_nan(*args, **kwargs)(x1)
tensorflow.is_nan
Returns which elements of x are NaN.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def is_non_decreasing(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_non_decreasing(*args, **kwargs)
It accepts the same arguments as tensorflow.is_non_decreasing.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_non_decreasing(x1, *args, **kwargs)
is equivalent to
builder.is_non_decreasing(*args, **kwargs)(x1)
tensorflow.is_non_decreasing
Returns `True` if `x` is non-decreasing.
Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.
If `x` has fewer than two elements, it is trivially non-decreasing.
See also: `is_strictly_increasing`
Args:
x: Numeric `Tensor`.
name: A name for this operation (optional). Defaults to "is_non_decreasing"
Returns:
Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.
Raises:
TypeError: if `x` is not a numeric tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
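A sketch (the `builder` construction is an assumption):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

x = tf.constant([1.0, 1.0, 2.0, 3.0])

# builder.is_non_decreasing() is a partial that expects the tensor:
ok = builder.is_non_decreasing()(x)  # equivalent to tf.is_non_decreasing(x)
# ok evaluates to True: [1, 1, 2, 3] has x[i] <= x[i+1] for every adjacent pair
```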
def is_numeric_tensor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_numeric_tensor(*args, **kwargs)
It accepts the same arguments as tensorflow.is_numeric_tensor.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_numeric_tensor(x1, *args, **kwargs)
is equivalent to
builder.is_numeric_tensor(*args, **kwargs)(x1)
tensorflow.is_numeric_tensor
None
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def is_strictly_increasing(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_strictly_increasing(*args, **kwargs)
It accepts the same arguments as tensorflow.is_strictly_increasing.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_strictly_increasing(x1, *args, **kwargs)
is equivalent to
builder.is_strictly_increasing(*args, **kwargs)(x1)
tensorflow.is_strictly_increasing
Returns `True` if `x` is strictly increasing.
Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.
If `x` has fewer than two elements, it is trivially strictly increasing.
See also: `is_non_decreasing`
Args:
x: Numeric `Tensor`.
name: A name for this operation (optional).
Defaults to "is_strictly_increasing"
Returns:
Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.
Raises:
TypeError: if `x` is not a numeric tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def is_variable_initialized(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.is_variable_initialized(*args, **kwargs)
It accepts the same arguments as tensorflow.is_variable_initialized.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.is_variable_initialized(x1, *args, **kwargs)
is equivalent to
builder.is_variable_initialized(*args, **kwargs)(x1)
tensorflow.is_variable_initialized
Tests if a variable has been initialized.
Args:
variable: A `Variable`.
Returns:
Returns a scalar boolean Tensor, `True` if the variable has been
initialized, `False` otherwise.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def l2_loss(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.l2_loss(*args, **kwargs)
It accepts the same arguments as tf.nn.l2_loss.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.nn.l2_loss(x1, *args, **kwargs)
is equivalent to
builder.l2_loss(*args, **kwargs)(x1)
tf.nn.l2_loss
L2 Loss.
Computes half the L2 norm of a tensor without the `sqrt`:
output = sum(t ** 2) / 2
Args:
t: A `Tensor`. Must be one of the following types: `float32`, `float64`,
`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
`complex128`, `qint8`, `quint8`, `qint32`, `half`.
Typically 2-D, but may have any dimensions.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `t`. 0-D.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
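A sketch, e.g. for a weight-decay term (the `builder` construction is an assumption):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

w = tf.constant([1.0, 2.0, 3.0])

# builder.l2_loss() is a partial that expects the tensor:
loss = builder.l2_loss()(w)  # equivalent to tf.nn.l2_loss(w)
# loss evaluates to (1 + 4 + 9) / 2 = 7.0
```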
def l2_loss_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.l2_loss_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.l2_loss_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.l2_loss`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of
the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, then atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def l2_loss_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.l2_loss_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.l2_loss_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.l2_loss`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension,
i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def l2_normalize(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.l2_normalize(*args, **kwargs)
It accepts the same arguments as tf.nn.l2_normalize.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.nn.l2_normalize(x1, *args, **kwargs)
is equivalent to
builder.l2_normalize(*args, **kwargs)(x1)
tf.nn.l2_normalize
Normalizes along dimension `dim` using an L2 norm.
For a 1-D tensor with `dim = 0`, computes
output = x / sqrt(max(sum(x**2), epsilon))
For `x` with more dimensions, independently normalizes each 1-D slice along
dimension `dim`.
Args:
x: A `Tensor`.
dim: Dimension along which to normalize. A scalar or a vector of
integers.
epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the
divisor if `norm < sqrt(epsilon)`.
name: A name for this operation (optional).
Returns:
A `Tensor` with the same shape as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
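A sketch (the `builder` construction is an assumption):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

x = tf.constant([3.0, 4.0])

# builder.l2_normalize(dim) is a partial that expects the tensor:
unit = builder.l2_normalize(0)(x)  # equivalent to tf.nn.l2_normalize(x, 0)
# unit evaluates to [0.6, 0.8], a vector with L2 norm 1
```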
def l2_normalize_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.l2_normalize_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.l2_normalize_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.l2_normalize`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of
the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, then atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def l2_normalize_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.l2_normalize_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.l2_normalize_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.l2_normalize`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension,
i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def lbeta(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.lbeta(*args, **kwargs)
It accepts the same arguments as tensorflow.lbeta.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tensorflow.lbeta(x1, *args, **kwargs)
is equivalent to
builder.lbeta(*args, **kwargs)(x1)
tensorflow.lbeta
Computes `ln(|Beta(x)|)`, reducing along the last dimension.
Given one-dimensional `z = [z_0,...,z_{K-1}]`, we define
Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)
And for `n + 1` dimensional `x` with shape `[N1, ..., Nn, K]`, we define
lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|).
In other words, the last dimension is treated as the `z` vector.
Note that if `z = [u, v]`, then
Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt,
which defines the traditional bivariate beta function.
Args:
x: A rank `n + 1` `Tensor` with type `float`, or `double`.
name: A name for the operation (optional).
Returns:
The logarithm of `|Beta(x)|` reducing along the last dimension.
Raises:
ValueError: If `x` is empty with rank one or less.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
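A numeric sanity check as a sketch (the `builder` construction is an assumption):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

z = tf.constant([2.0, 2.0])

# builder.lbeta() is a partial that expects the tensor:
log_beta = builder.lbeta()(z)  # equivalent to tf.lbeta(z)
# Beta([2, 2]) = Gamma(2) * Gamma(2) / Gamma(4) = 1/6,
# so log_beta evaluates to ln(1/6) ~ -1.7918
```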
def learned_unigram_candidate_sampler(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.learned_unigram_candidate_sampler(*args, **kwargs)
It accepts the same arguments as tf.nn.learned_unigram_candidate_sampler.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.nn.learned_unigram_candidate_sampler(x1, *args, **kwargs)
is equivalent to
builder.learned_unigram_candidate_sampler(*args, **kwargs)(x1)
tf.nn.learned_unigram_candidate_sampler
Samples a set of classes from a distribution learned during training.
This operation randomly samples a tensor of sampled classes
(`sampled_candidates`) from the range of integers `[0, range_max)`.
The elements of `sampled_candidates` are drawn without replacement
(if `unique=True`) or with replacement (if `unique=False`) from
the base distribution.
The base distribution for this operation is constructed on the fly
during training. It is a unigram distribution over the target
classes seen so far during training. Every integer in `[0, range_max)`
begins with a weight of 1, and is incremented by 1 each time it is
seen as a target class. The base distribution is not saved to checkpoints,
so it is reset when the model is reloaded.
In addition, this operation returns tensors `true_expected_count`
and `sampled_expected_count` representing the number of times each
of the target classes (`true_classes`) and the sampled
classes (`sampled_candidates`) is expected to occur in an average
tensor of sampled classes. These values correspond to `Q(y|x)`
defined in this document.
If `unique=True`, then these are post-rejection probabilities and we
compute them approximately.
Args:
true_classes: A `Tensor` of type `int64` and shape
`[batch_size, num_true]`. The target classes.
num_true: An `int`. The number of target classes per training example.
num_sampled: An `int`. The number of classes to randomly sample per batch.
unique: A `bool`. Determines whether all sampled classes in a batch are
unique.
range_max: An `int`. The number of possible classes.
seed: An `int`. An operation-specific seed. Default is 0.
name: A name for the operation (optional).
Returns:
sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.
The sampled classes.
true_expected_count: A tensor of type `float`. Same shape as
`true_classes`. The expected counts under the sampling distribution
of each of `true_classes`.
sampled_expected_count: A tensor of type `float`. Same shape as
`sampled_candidates`. The expected counts under the sampling distribution
of each of `sampled_candidates`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
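A sketch with keyword arguments passed through the partial (the `builder` construction and the toy classes are assumptions):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder

builder = TensorBuilder(lambda x: x)  # assumed way to obtain an instance

true_classes = tf.constant([[7], [42]], dtype=tf.int64)  # [batch_size, num_true]

# The partial expects true_classes as its 1st argument:
sampler = builder.learned_unigram_candidate_sampler(
    num_true=1, num_sampled=5, unique=True, range_max=100)
sampled, true_exp, sampled_exp = sampler(true_classes)
# equivalent to tf.nn.learned_unigram_candidate_sampler(
#     true_classes, 1, 5, True, 100)
```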
def learned_unigram_candidate_sampler_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.learned_unigram_candidate_sampler_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.learned_unigram_candidate_sampler_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.learned_unigram_candidate_sampler`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of
the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, then atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def learned_unigram_candidate_sampler_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.learned_unigram_candidate_sampler_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.learned_unigram_candidate_sampler_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.learned_unigram_candidate_sampler`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def less(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.less(*args, **kwargs)
It accepts the same arguments as tensorflow.less.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.less(x1, *args, **kwargs)
is equivalent to
builder.less(*args, **kwargs)(x1)
tensorflow.less
Returns the truth value of (x < y) element-wise.
NOTE: Less supports broadcasting. More about broadcasting here.
Args:
x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).
Returns:
A Tensor of type bool.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
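For example, a minimal sketch of the partial-application pattern, assuming a TF-0.x-era graph API and that the library exposes its builder instance as T:
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder instance

x = tf.constant([1.0, 5.0])
# T.less(3.0) is a partial awaiting its first argument; piping x through it
# builds the same op as tf.less(x, 3.0), i.e. [True, False].
y = T.Pipe(x, T.less(3.0))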
def less_equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.less_equal(*args, **kwargs)
It accepts the same arguments as tensorflow.less_equal.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.less_equal(x1, *args, **kwargs)
is equivalent to
builder.less_equal(*args, **kwargs)(x1)
tensorflow.less_equal
Returns the truth value of (x <= y) element-wise.
NOTE: LessEqual supports broadcasting. More about broadcasting here.
Args:
x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).
Returns:
A Tensor of type bool.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def lgamma(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.lgamma(*args, **kwargs)
It accepts the same arguments as tensorflow.lgamma.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.lgamma(x1, *args, **kwargs)
is equivalent to
builder.lgamma(*args, **kwargs)(x1)
tensorflow.lgamma
Computes the log of the absolute value of `Gamma(x)` element-wise.
Args:
x: A Tensor. Must be one of the following types: half, float32, float64.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as x.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
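A quick numerical check of the definition in plain Python, where math.lgamma computes the same function: Gamma(5) = 4! = 24, so lgamma(5) equals log(24).
import math
assert abs(math.lgamma(5.0) - math.log(24.0)) < 1e-12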
def lin_space(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.lin_space(*args, **kwargs)
It accepts the same arguments as tensorflow.lin_space.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.lin_space(x1, *args, **kwargs)
is equivalent to
builder.lin_space(*args, **kwargs)(x1)
tensorflow.lin_space
Generates values in an interval.
A sequence of num evenly-spaced values is generated, beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.
For example:
tf.linspace(10.0, 12.0, 3, name="linspace") => [10.0 11.0 12.0]
Args:
start: A Tensor. Must be one of the following types: float32, float64. First entry in the range.
stop: A Tensor. Must have the same type as start. Last entry in the range.
num: A Tensor. Must be one of the following types: int32, int64. Number of values to generate.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as start. 1-D. The generated values.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
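A plain-Python check of the spacing rule: the increment is (stop - start) / (num - 1), so both endpoints are hit exactly.
start, stop, num = 10.0, 12.0, 3
step = (stop - start) / (num - 1)
assert [start + i * step for i in range(num)] == [10.0, 11.0, 12.0]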
def linear_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.linear_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.linear_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `None`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def linear_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.linear_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.linear_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `None`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
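A sketch of what the generated name means, under the same assumptions as above (builder instance T; tf.contrib.layers available): linear_layer is fully_connected with activation_fn fixed to None, so the two forms below should build equivalent graphs.
import tensorflow as tf
from tensorbuilder import T  # assumed entry point

x = tf.placeholder(tf.float32, [None, 4])
h1 = T.Pipe(x, T.linear_layer(10))                                 # builder form
h2 = tf.contrib.layers.fully_connected(x, 10, activation_fn=None)  # direct form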
def list_diff(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.list_diff(*args, **kwargs)
It accepts the same arguments as tensorflow.list_diff.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.list_diff(x1, *args, **kwargs)
is equivalent to
builder.list_diff(*args, **kwargs)(x1)
tensorflow.list_diff
Computes the difference between two lists of numbers or strings.
Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:
out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]
For example, given this input:
x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]
This operation would return:
out ==> [2, 4, 6]
idx ==> [1, 3, 5]
Args:
x: A Tensor. 1-D. Values to keep.
y: A Tensor. Must have the same type as x. 1-D. Values to remove.
out_idx: An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name: A name for the operation (optional).
Returns:
A tuple of Tensor objects (out, idx).
out: A Tensor. Has the same type as x. 1-D. Values present in x but not in y.
idx: A Tensor of type out_idx. 1-D. Positions of x values preserved in out.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
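A plain-Python model of these semantics: keep x's order and duplicates, and record the index in x of every kept value.
x = [1, 2, 3, 4, 5, 6]
y = {1, 3, 5}
out = [v for v in x if v not in y]
idx = [i for i, v in enumerate(x) if v not in y]
assert out == [2, 4, 6] and idx == [1, 3, 5]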
def load_file_system_library(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.load_file_system_library(*args, **kwargs)
It accepts the same arguments as tensorflow.load_file_system_library.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.load_file_system_library(x1, *args, **kwargs)
is equivalent to
builder.load_file_system_library(*args, **kwargs)(x1)
tensorflow.load_file_system_library
Loads a TensorFlow plugin containing a file system implementation.
Pass library_filename to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here.
Args: library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.
Returns: None.
Raises: RuntimeError: when unable to load the library.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def load_op_library(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.load_op_library(*args, **kwargs)
It accepts the same arguments as tensorflow.load_op_library.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.load_op_library(x1, *args, **kwargs)
is equivalent to
builder.load_op_library(*args, **kwargs)(x1)
tensorflow.load_op_library
Loads a TensorFlow plugin containing custom ops and kernels.
Pass "library_filename" to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here. When the library is loaded, ops and kernels registered in the library via the REGISTER_* macros are made available in the TensorFlow process. Note that ops with the same name as an existing op are rejected and not registered with the process.
Args: library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.
Returns: A python module containing the Python wrappers for Ops defined in the plugin.
Raises: RuntimeError: when unable to load the library or get the python wrappers.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
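A short usage sketch; the plugin path and op name below are hypothetical placeholders:
import tensorflow as tf

module = tf.load_op_library('/path/to/zero_out.so')  # hypothetical .so
# Ops registered in the plugin via REGISTER_OP become attributes of the
# returned module, e.g. module.zero_out(...).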
def local_response_normalization_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.local_response_normalization_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.local_response_normalization_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.local_response_normalization`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def local_response_normalization_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.local_response_normalization_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.local_response_normalization_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.local_response_normalization`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def local_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.local_variables(*args, **kwargs)
It accepts the same arguments as tensorflow.local_variables.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.local_variables(x1, *args, **kwargs)
is equivalent to
builder.local_variables(*args, **kwargs)(x1)
tensorflow.local_variables
Returns all variables created with collection=[LOCAL_VARIABLES].
Returns: A list of local Variable objects.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log(*args, **kwargs)
It accepts the same arguments as tensorflow.log.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.log(x1, *args, **kwargs)
is equivalent to
builder.log(*args, **kwargs)(x1)
tensorflow.log
Computes the natural logarithm of x element-wise, i.e. \(y = \log_e x\).
Args:
x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as x.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log_poisson_loss(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_poisson_loss(*args, **kwargs)
It accepts the same arguments as tf.nn.log_poisson_loss.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.log_poisson_loss(x1, *args, **kwargs)
is equivalent to
builder.log_poisson_loss(*args, **kwargs)(x1)
tf.nn.log_poisson_loss
Computes log Poisson loss given `log_input`.
Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.
For brevity, let c = log(x) = log_input and z = targets. The log Poisson loss is
-log(exp(-x) * (x^z) / z!)
  = -log(exp(-x) * (x^z)) + log(z!)
  ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
  = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
  = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
where the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it matters for correct relative loss comparisons; it is only computed when compute_full_loss == True.
Args:
log_input: A Tensor of type float32 or float64.
targets: A Tensor of the same type and shape as log_input.
compute_full_loss: whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
name: A name for the operation (optional).
Returns:
A Tensor of the same shape as log_input with the componentwise log-Poisson losses.
Raises:
ValueError: If log_input and targets do not have the same shape.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
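A plain-Python evaluation of the final form of the loss for one component:
import math

c, z = 0.3, 2.0                       # c = log_input, z = targets
partial_loss = math.exp(c) - z * c    # default: constant term dropped
stirling = z * math.log(z) - z + 0.5 * math.log(2 * math.pi * z)
full_loss = partial_loss + stirling   # compute_full_loss=True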
def log_poisson_loss_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_poisson_loss_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.log_poisson_loss_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.log_poisson_loss`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log_poisson_loss_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_poisson_loss_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.log_poisson_loss_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.log_poisson_loss`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log_softmax(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_softmax(*args, **kwargs)
It accepts the same arguments as tf.nn.log_softmax.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.log_softmax(x1, *args, **kwargs)
is equivalent to
builder.log_softmax(*args, **kwargs)(x1)
tf.nn.log_softmax
Computes log softmax activations.
For each batch i and class j we have
logsoftmax = logits - log(reduce_sum(exp(logits), dim))
Args:
logits: A non-empty Tensor. Must be one of the following types: half, float32, float64.
dim: The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as logits. Same shape as logits.
Raises:
InvalidArgumentError: if logits is empty or dim is beyond the last dimension of logits.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
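A NumPy check of the definition (with the usual max-subtraction for numerical stability):
import numpy as np

logits = np.array([1.0, 2.0, 3.0])
shifted = logits - logits.max()       # subtract max for stability
log_sm = shifted - np.log(np.exp(shifted).sum())
assert np.isclose(np.exp(log_sm).sum(), 1.0)  # probabilities sum to 1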
def log_softmax_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_softmax_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.log_softmax_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.log_softmax`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log_softmax_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_softmax_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.log_softmax_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.log_softmax`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log_uniform_candidate_sampler(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_uniform_candidate_sampler(*args, **kwargs)
It accepts the same arguments as tf.nn.log_uniform_candidate_sampler.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.log_uniform_candidate_sampler(x1, *args, **kwargs)
is equivalent to
builder.log_uniform_candidate_sampler(*args, **kwargs)(x1)
tf.nn.log_uniform_candidate_sampler
Samples a set of classes using a log-uniform (Zipfian) base distribution.
This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max).
The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution.
The base distribution for this operation is an approximately log-uniform or Zipfian distribution:
P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
This sampler is useful when the target classes approximately follow such a distribution - for example, if the classes represent words in a lexicon sorted in decreasing order of frequency. If your classes are not ordered by decreasing frequency, do not use this op.
In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately.
Args:
true_classes: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
num_true: An int. The number of target classes per training example.
num_sampled: An int. The number of classes to randomly sample per batch.
unique: A bool. Determines whether all sampled classes in a batch are unique.
range_max: An int. The number of possible classes.
seed: An int. An operation-specific seed. Default is 0.
name: A name for the operation (optional).
Returns:
sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes.
true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes.
sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
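A quick check that the base distribution sums to 1 over [0, range_max) (the sum of log differences telescopes to log(range_max + 1)):
import math

range_max = 1000
p = [(math.log(c + 2) - math.log(c + 1)) / math.log(range_max + 1)
     for c in range(range_max)]
assert abs(sum(p) - 1.0) < 1e-9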
def log_uniform_candidate_sampler_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_uniform_candidate_sampler_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.log_uniform_candidate_sampler_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.log_uniform_candidate_sampler`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def log_uniform_candidate_sampler_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.log_uniform_candidate_sampler_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.log_uniform_candidate_sampler_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.log_uniform_candidate_sampler`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
Note: if inputs has rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. [batch_size, depth], [None, None, None, channels].
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def logical_and(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.logical_and(*args, **kwargs)
It accepts the same arguments as tensorflow.logical_and.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.logical_and(x1, *args, **kwargs)
is equivalent to
builder.logical_and(*args, **kwargs)(x1)
tensorflow.logical_and
Returns the truth value of x AND y element-wise.
NOTE: LogicalAnd supports broadcasting. More about broadcasting here.
Args:
x: A Tensor of type bool.
y: A Tensor of type bool.
name: A name for the operation (optional).
Returns:
A Tensor of type bool.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def logical_not(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.logical_not(*args, **kwargs)
It accepts the same arguments as tensorflow.logical_not.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.logical_not(x1, *args, **kwargs)
is equivalent to
builder.logical_not(*args, **kwargs)(x1)
tensorflow.logical_not
Returns the truth value of NOT x element-wise.
Args:
x: A Tensor of type bool.
name: A name for the operation (optional).
Returns:
A Tensor of type bool.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def logical_or(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.logical_or(*args, **kwargs)
It accepts the same arguments as tensorflow.logical_or.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.logical_or(x1, *args, **kwargs)
is equivalent to
builder.logical_or(*args, **kwargs)(x1)
tensorflow.logical_or
Returns the truth value of x OR y element-wise.
NOTE: LogicalOr supports broadcasting. More about broadcasting here.
Args:
x: A Tensor of type bool.
y: A Tensor of type bool.
name: A name for the operation (optional).
Returns:
A Tensor of type bool.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def logical_xor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.logical_xor(*args, **kwargs)
It accepts the same arguments as tensorflow.logical_xor.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.logical_xor(x1, *args, **kwargs)
is equivalent to
builder.logical_xor(*args, **kwargs)(x1)
tensorflow.logical_xor
Returns the truth value of x XOR y element-wise: x ^ y = (x | y) & ~(x & y).
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
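A plain-Python check of this identity over all four input combinations:
for x in (False, True):
    for y in (False, True):
        assert (x != y) == ((x or y) and not (x and y))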
def lrn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.lrn(*args, **kwargs)
It accepts the same arguments as tf.nn.lrn.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.lrn(x1, *args, **kwargs)
is equivalent to
builder.lrn(*args, **kwargs)(x1)
tf.nn.lrn
Local Response Normalization.
The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,
sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta
For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
Args:
input: A Tensor. Must be one of the following types: float32, half. 4-D.
depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
alpha: An optional float. Defaults to 1. A scale factor, usually positive.
beta: An optional float. Defaults to 0.5. An exponent.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as input.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
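A NumPy model of the normalization applied to a single 1-D vector (one last-dimension slice), using the documented defaults; clipping the window at the edges is an assumption of this sketch:
import numpy as np

def lrn_1d(v, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5):
    # Normalize one 1-D vector as described above, clipping the window
    # to the valid index range at the edges.
    out = np.empty_like(v)
    for d in range(len(v)):
        lo, hi = max(0, d - depth_radius), min(len(v), d + depth_radius + 1)
        sqr_sum = np.sum(v[lo:hi] ** 2)
        out[d] = v[d] / (bias + alpha * sqr_sum) ** beta
    return out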
def lrn_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.lrn_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.lrn_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.lrn`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to rate if rate is greater than one.
Args:
inputs: a 4-D tensor [batch_size, height, width, channels].
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2 [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of VALID or SAME.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and stride must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Default is None, for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both rate and stride are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def lrn_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.lrn_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.lrn_layer(*args, **kwargs)(x1)` and the keyword argument `activation_fn` is set to `tf.nn.lrn`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, e.g. `[batch_size, depth]` or `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
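A minimal sketch of the same mechanics for the fully connected variant, under the same `T` assumption; note that `tf.nn.lrn` itself expects a 4-D input, so this particular generated combination mainly illustrates how the partial is applied:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

x = tf.placeholder(tf.float32, [None, 64])

# Equivalent to
# tf.contrib.layers.fully_connected(x, 10, activation_fn=tf.nn.lrn)
h = T.lrn_layer(10)(x)
```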
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def make_all(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_all(*args, **kwargs)
It accepts the same arguments as `tf.nn.make_all`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.nn.make_all(x1, *args, **kwargs)`
is equivalent to
`builder.make_all(*args, **kwargs)(x1)`
tf.nn.make_all
Generates `__all__` from the docstring of one or more modules.
Usage: `make_all(__name__)` or `make_all(__name__, [sys.modules(__name__), other_module])`. The doc string modules must each have a docstring, and `__all__` will contain all symbols with `@@` references, where that symbol currently exists in the module named `module_name`.
Args:
module_name: The name of the module (usually `__name__`).
doc_string_modules: a list of modules from which to take the docstrings. If None, then a list containing only the module named `module_name` is used.
Returns:
A list suitable for use as `__all__`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def make_all_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_all_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.make_all_conv2d_layer(*args, **kwargs)(x1)` and the keyword argument `activation_fn` is set to `tf.nn.make_all`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def make_all_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_all_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.make_all_layer(*args, **kwargs)(x1)` and the keyword argument `activation_fn` is set to `tf.nn.make_all`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, e.g. `[batch_size, depth]` or `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def make_audio_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_audio_summary(*args, **kwargs)
It accepts the same arguments as `tf.audio_summary`.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned which expects the 2nd argument, such that
`tf.audio_summary(x1, x2, *args, **kwargs)`
is equivalent to
`builder.make_audio_summary(x1, *args, **kwargs)(x2)`
tf.audio_summary
Outputs a `Summary` protocol buffer with audio.
The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor`, which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range `[-1.0, 1.0]` with a sample rate of `sample_rate`.
The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:
- If `max_outputs` is 1, the summary value tag is 'tag/audio'.
- If `max_outputs` is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc.
Args:
tag: A scalar `Tensor` of type `string`. Used to build the `tag` of the summary values.
tensor: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`.
sample_rate: The sample rate of the signal in hertz.
max_outputs: Max number of batch elements to generate audio for.
collections: Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to `[ops.GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
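For example, a sketch of the two-step call, assuming the builder instance is exposed as `T` (an assumption about your setup); the tag is passed up front and the tensor is supplied through the partial:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

audio = tf.placeholder(tf.float32, [None, 16000])  # [batch_size, frames]

# Equivalent to tf.audio_summary("train/audio", audio, sample_rate=16000.0)
summary_op = T.make_audio_summary("train/audio", sample_rate=16000.0)(audio)
```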
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
def make_histogram_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_histogram_summary(*args, **kwargs)
It accepts the same arguments as `tf.histogram_summary`.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned which expects the 2nd argument, such that
`tf.histogram_summary(x1, x2, *args, **kwargs)`
is equivalent to
`builder.make_histogram_summary(x1, *args, **kwargs)(x2)`
tf.histogram_summary
Outputs a `Summary` protocol buffer with a histogram.
The generated `Summary` has one summary value containing a histogram for `values`.
This op reports an `InvalidArgument` error if any value is not finite.
Args:
tag: A `string` `Tensor`. 0-D. Tag to use for the summary value.
values: A real numeric `Tensor`. Any shape. Values to use to build the histogram.
collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
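A short sketch under the same assumed `T` builder instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

weights = tf.Variable(tf.truncated_normal([64, 10], stddev=0.1))

# Equivalent to tf.histogram_summary("weights", weights)
summary_op = T.make_histogram_summary("weights")(weights)
```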
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
def make_image_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_image_summary(*args, **kwargs)
It accepts the same arguments as `tf.image_summary`.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned which expects the 2nd argument, such that
`tf.image_summary(x1, x2, *args, **kwargs)`
is equivalent to
`builder.make_image_summary(x1, *args, **kwargs)(x2)`
tf.image_summary
Outputs a `Summary` protocol buffer with images.
The summary has up to `max_images` summary values containing images. The images are built from `tensor`, which must be 4-D with shape `[batch_size, height, width, channels]` and where `channels` can be:
- 1: `tensor` is interpreted as Grayscale.
- 3: `tensor` is interpreted as RGB.
- 4: `tensor` is interpreted as RGBA.
The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range `[0, 255]`. `uint8` values are unchanged. The op uses two different normalization algorithms:
- If the input values are all positive, they are rescaled so the largest one is 255.
- If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.
The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:
- If `max_images` is 1, the summary value tag is 'tag/image'.
- If `max_images` is greater than 1, the summary value tags are generated sequentially as 'tag/image/0', 'tag/image/1', etc.
Args:
tag: A scalar `Tensor` of type `string`. Used to build the `tag` of the summary values.
tensor: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height, width, channels]` where `channels` is 1, 3, or 4.
max_images: Max number of batch elements to generate images for.
collections: Optional list of ops.GraphKeys. The collections to add the summary to. Defaults to `[ops.GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
def make_merge_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_merge_summary(*args, **kwargs)
It accepts the same arguments as `tf.merge_summary`.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned which expects the 2nd argument, such that
`tf.merge_summary(x1, x2, *args, **kwargs)`
is equivalent to
`builder.make_merge_summary(x1, *args, **kwargs)(x2)`
tf.merge_summary
Merges summaries.
This op creates a `Summary` protocol buffer that contains the union of all the values in the input summaries.
When the Op is run, it reports an `InvalidArgument` error if multiple values in the summaries to merge use the same tag.
Args:
inputs: A list of `string` `Tensor` objects containing serialized `Summary` protocol buffers.
collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer resulting from the merging.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
def make_scalar_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_scalar_summary(*args, **kwargs)
It accepts the same arguments as `tf.scalar_summary`.
However, the 2nd argument is omitted; a partial with the rest of the arguments is returned which expects the 2nd argument, such that
`tf.scalar_summary(x1, x2, *args, **kwargs)`
is equivalent to
`builder.make_scalar_summary(x1, *args, **kwargs)(x2)`
tf.scalar_summary
Outputs a `Summary` protocol buffer with scalar values.
The input `tags` and `values` must have the same shape. The generated summary has a summary value for each tag-value pair in `tags` and `values`.
Args:
tags: A `string` `Tensor`. Tags for the summaries.
values: A real numeric `Tensor`. Values for the summaries.
collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
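A sketch under the same assumed `T` builder instance; the tag goes in the direct call, the value through the partial:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

y_true = tf.placeholder(tf.float32, [None])
y_pred = tf.placeholder(tf.float32, [None])
loss = tf.reduce_mean(tf.square(y_pred - y_true))

# Equivalent to tf.scalar_summary("loss", loss)
summary_op = T.make_scalar_summary("loss")(loss)
```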
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
def make_template(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.make_template(*args, **kwargs)
It accepts the same arguments as `tensorflow.make_template`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.make_template(x1, *args, **kwargs)`
is equivalent to
`builder.make_template(*args, **kwargs)(x1)`
tensorflow.make_template
Given an arbitrary function, wrap it so that it does variable sharing.
This wraps `func_` in a Template and partially evaluates it. Templates are functions that create variables the first time they are called and reuse them thereafter. In order for `func_` to be compatible with a `Template` it must have the following properties:
- The function should create all trainable variables and any variables that should be reused by calling `tf.get_variable`. If a trainable variable is created using `tf.Variable`, then a ValueError will be thrown. Variables that are intended to be locals can be created by specifying `tf.Variable(..., trainable=False)`.
- The function may use variable scopes and other templates internally to create and reuse variables, but it shouldn't use `tf.get_variables` to capture variables that are defined outside of the scope of the function.
- Internal scopes and variable names should not depend on any arguments that are not supplied to `make_template`. In general you will get a ValueError telling you that you are trying to reuse a variable that doesn't exist if you make a mistake.
In the following example, both `z` and `w` will be scaled by the same `y`. It is important to note that if we didn't assign `scalar_name` and used a different name for z and w, a `ValueError` would be thrown because it couldn't reuse the variable.
```python
def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name,
                           shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')

z = scale_by_y(input1)
w = scale_by_y(input2)
```
As a safeguard, the returned function will raise a `ValueError` after the first call if trainable variables are created by calling `tf.Variable`.
If all of these are true, then two properties are enforced by the template:
- Calling the same template multiple times will share all non-local variables.
- Two different templates are guaranteed to be unique, unless you reenter the same variable scope as the initial definition of a template and redefine it. An example of this exception:
```python
def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name,
                           shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

with tf.variable_scope('scope') as vs:
    scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
    z = scale_by_y(input1)
    w = scale_by_y(input2)

# Creates a template that reuses the variables above.
with tf.variable_scope(vs, reuse=True):
    scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y')
    z2 = scale_by_y2(input1)
    w2 = scale_by_y2(input2)
```
Depending on the value of `create_scope_now_`, the full variable scope may be captured either at the time of first call or at the time of construction. If this option is set to True, then all Tensors created by repeated calls to the template will have an extra trailing _N+1 to their name, as the first time the scope is entered in the Template constructor no Tensors are created.
Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to reduce the likelihood of collisions with kwargs.
Args:
name_: A name for the scope created by this template. If necessary, the name will be made unique by appending `_N` to the name.
func_: The function to wrap.
create_scope_now_: Boolean controlling whether the scope should be created when the template is constructed or when the template is called. Default is False, meaning the scope is created when the template is called.
unique_name_: When used, it overrides `name_` and is not made unique. If a template of the same scope/unique_name already exists and reuse is false, an error is raised. Defaults to None.
**kwargs: Keyword arguments to apply to `func_`.
Returns:
A function to encapsulate a set of variables which should be created once and reused. An enclosing scope will be created, either where `make_template` is called, or wherever the result is called, depending on the value of `create_scope_now_`. Regardless of the value, the first time the template is called it will enter the scope with no reuse, and call `func_` to create variables, which are guaranteed to be unique. All subsequent calls will re-enter the scope and reuse those variables.
Raises: ValueError: if the name is None.
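Note that for this method the omitted 1st argument is `name_`, so the generated partial receives the template name last. A sketch, assuming the builder instance is exposed as `T`:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

def my_op(x, scalar_name):
    var1 = tf.get_variable(scalar_name, shape=[],
                           initializer=tf.constant_initializer(1))
    return x * var1

# Equivalent to tf.make_template('scale_by_y', my_op, scalar_name='y')
scale_by_y = T.make_template(my_op, scalar_name='y')('scale_by_y')
```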
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def map_fn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.map_fn(*args, **kwargs)
It accepts the same arguments as `tensorflow.map_fn`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.map_fn(x1, *args, **kwargs)`
is equivalent to
`builder.map_fn(*args, **kwargs)(x1)`
tensorflow.map_fn
map on the list of tensors unpacked from `elems` on dimension 0.
The simplest version of `map` repeatedly applies the callable `fn` to a sequence of elements from first to last. The elements are made of the tensors unpacked from `elems`. `dtype` is the data type of the return value of `fn`. Users must provide `dtype` if it is different from the data type of `elems`.
Suppose that `elems` is unpacked into `values`, a list of tensors. The shape of the result tensor is `[values.shape[0]] + fn(values[0]).shape`.
This method also allows multi-arity `elems` and output of `fn`. If `elems` is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of `fn` may match the structure of `elems`. That is, if `elems` is `(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: `fn = lambda (t1, [t2, t3, [t4, t5]]):`.
Furthermore, `fn` may emit a different structure than its input. For example, `fn` may look like: `fn = lambda t1: return (t1 + 1, t1 - 1)`. In this case, the `dtype` parameter is not optional: `dtype` must be a type or (possibly nested) tuple of types matching the output of `fn`.
Args:
fn: The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as `elems`. Its output must have the same structure as `dtype` if one is provided, otherwise it must have the same structure as `elems`.
elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to `fn`.
dtype: (optional) The output type(s) of `fn`. If `fn` returns a structure of Tensors differing from the structure of `elems`, then `dtype` is not optional and must have the same structure as the output of `fn`.
parallel_iterations: (optional) The number of iterations allowed to run in parallel.
back_prop: (optional) True enables support for back propagation.
swap_memory: (optional) True enables GPU-CPU memory swapping.
infer_shape: (optional) False disables tests for consistent output shapes.
name: (optional) Name prefix for the returned tensors.
Returns:
A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying `fn` to tensors unpacked from `elems` along the first dimension, from first to last.
Raises:
TypeError: if `fn` is not callable or the structure of the output of `fn` and `dtype` do not match.
ValueError: if the lengths of the output of `fn` and `dtype` do not match.
Examples:
```python
elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]
```
```python
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
```
```python
elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]
```
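With the builder, the omitted 1st argument is `fn` itself, so the partial is applied to the callable. A sketch under the assumed `T` instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

elems = tf.constant([1, 2, 3, 4, 5, 6])

# Equivalent to tf.map_fn(lambda x: x * x, elems)
squares = T.map_fn(elems)(lambda x: x * x)
```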
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matching_files(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matching_files(*args, **kwargs)
It accepts the same arguments as `tensorflow.matching_files`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matching_files(x1, *args, **kwargs)`
is equivalent to
`builder.matching_files(*args, **kwargs)(x1)`
tensorflow.matching_files
Returns the set of files matching a pattern.
Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion.
Args:
pattern: A `Tensor` of type `string`. A (scalar) shell wildcard pattern.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`. A vector of matching filenames.
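A sketch under the assumed `T` instance (the glob pattern below is illustrative only):
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

# Equivalent to tf.matching_files("./data/*.tfrecords")
files = T.matching_files()("./data/*.tfrecords")
```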
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matmul(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matmul(*args, **kwargs)
It accepts the same arguments as `tensorflow.matmul`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matmul(x1, *args, **kwargs)`
is equivalent to
`builder.matmul(*args, **kwargs)(x1)`
tensorflow.matmul
Multiplies matrix `a` by matrix `b`, producing `a` * `b`.
The inputs must be two-dimensional matrices, with matching inner dimensions, possibly after transposition.
Both matrices must be of the same type. The supported types are: `float32`, `float64`, `int32`, `complex64`.
Either matrix can be transposed on the fly by setting the corresponding flag to `True`. This is `False` by default.
If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.
For example:
```python
# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])  # => [[1. 2. 3.]
                                                   #     [4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])  # => [[7. 8.]
                                                      #     [9. 10.]
                                                      #     [11. 12.]]
c = tf.matmul(a, b)  # => [[58 64]
                     #     [139 154]]
```
Args:
a: `Tensor` of type `float32`, `float64`, `int32` or `complex64`.
b: `Tensor` with same type as `a`.
transpose_a: If `True`, `a` is transposed before multiplication.
transpose_b: If `True`, `b` is transposed before multiplication.
a_is_sparse: If `True`, `a` is treated as a sparse matrix.
b_is_sparse: If `True`, `b` is treated as a sparse matrix.
name: Name for the operation (optional).
Returns:
A `Tensor` of the same type as `a`.
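A sketch of the builder call order under the assumed `T` instance; `T.matmul(b)` is a partial waiting for `a`:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

a = tf.constant([1., 2., 3., 4., 5., 6.], shape=[2, 3])
b = tf.constant([7., 8., 9., 10., 11., 12.], shape=[3, 2])

# Equivalent to tf.matmul(a, b)
c = T.matmul(b)(a)
```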
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_band_part(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_band_part(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_band_part`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_band_part(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_band_part(*args, **kwargs)(x1)`
tensorflow.matrix_band_part
Copy a tensor, setting everything outside a central band in each innermost matrix to zero.
The `band` part is computed as follows:
Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a tensor with the same shape where
`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.
The indicator function `in_band(m, n)` is one if `(num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper)`, and zero otherwise.
For example:
```prettyprint
# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]
```
Useful special cases:
```prettyprint
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.
```
Args:
input: A `Tensor`. Rank `k` tensor.
num_lower: A `Tensor` of type `int64`. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle.
num_upper: A `Tensor` of type `int64`. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. Rank `k` tensor of the same shape as input. The extracted banded tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_determinant(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_determinant(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_determinant`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_determinant(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_determinant(*args, **kwargs)(x1)`
tensorflow.matrix_determinant
Computes the determinant of one or more square matrices.
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices `[..., :, :]`.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`. Shape is `[..., M, M]`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. Shape is `[...]`.
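A sketch under the assumed `T` instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

m = tf.constant([[1., 2.], [3., 4.]])

# Equivalent to tf.matrix_determinant(m); the result is -2.0.
det = T.matrix_determinant()(m)
```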
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_diag(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_diag(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_diag`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_diag(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_diag(*args, **kwargs)(x1)`
tensorflow.matrix_diag
Returns a batched diagonal tensor with given batched diagonal values.
Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:
Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:
`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.
For example:
```prettyprint
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

# which has shape (2, 4, 4)
```
Args:
diagonal: A `Tensor`. Rank `k`, where `k >= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `diagonal`. Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_diag_part(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_diag_part(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_diag_part`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_diag_part(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_diag_part(*args, **kwargs)(x1)`
tensorflow.matrix_diag_part
Returns the batched diagonal part of a batched tensor.
This operation returns a tensor with the `diagonal` part of the batched `input`. The `diagonal` part is computed as follows:
Assume `input` has `k` dimensions `[I, J, K, ..., N, N]`, then the output is a tensor of rank `k - 1` with dimensions `[I, J, K, ..., N]` where:
`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.
The input must be at least a matrix.
For example:
```prettyprint
# 'input' is [[[1, 0, 0, 0]
#              [0, 2, 0, 0]
#              [0, 0, 3, 0]
#              [0, 0, 0, 4]],
#             [[5, 0, 0, 0]
#              [0, 6, 0, 0]
#              [0, 0, 7, 0]
#              [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

# which has shape (2, 4)
```
Args:
input: A `Tensor`. Rank `k` tensor where `k >= 2` and the last two dimensions are equal.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. The extracted diagonal(s) having shape `diagonal.shape = input.shape[:-1]`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_inverse(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_inverse(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_inverse`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_inverse(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_inverse(*args, **kwargs)(x1)`
tensorflow.matrix_inverse
Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices `[..., :, :]`.
The op uses LU decomposition with partial pivoting to compute the inverses.
If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception, or it may simply return a garbage result.
Args:
input: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
adjoint: An optional `bool`. Defaults to `False`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
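A sketch under the assumed `T` instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

m = tf.constant([[4., 7.], [2., 6.]])

# Equivalent to tf.matrix_inverse(m)
inv = T.matrix_inverse()(m)
```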
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_set_diag(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_set_diag(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_set_diag`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_set_diag(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_set_diag(*args, **kwargs)(x1)`
tensorflow.matrix_set_diag
Returns a batched matrix tensor with new batched diagonal values.
Given `input` and `diagonal`, this operation returns a tensor with the same shape and values as `input`, except for the diagonals of the innermost matrices. These will be overwritten by the values in `diagonal`. The batched matrices must be square.
The output is computed as follows:
Assume `input` has `k+1` dimensions `[I, J, K, ..., N, N]` and `diagonal` has `k` dimensions `[I, J, K, ..., N]`. Then the output is a tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:
`output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.
`output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.
Args:
input: A `Tensor`. Rank `k+1`, where `k >= 1`.
diagonal: A `Tensor`. Must have the same type as `input`. Rank `k`, where `k >= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. Rank `k+1`, with `output.shape = input.shape`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_solve(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_solve(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_solve`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_solve(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_solve(*args, **kwargs)(x1)`
tensorflow.matrix_solve
Solves systems of linear equations.
`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. `rhs` is a tensor of shape `[..., M, K]`. The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
If `adjoint` is `True` then each output matrix satisfies `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.
Args:
matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
rhs: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, K]`.
adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
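A sketch under the assumed `T` instance; `matrix` (the omitted 1st argument) is supplied last:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

matrix = tf.constant([[3., 1.], [1., 2.]])
rhs = tf.constant([[9.], [8.]])

# Equivalent to tf.matrix_solve(matrix, rhs); solves matrix * x = rhs.
x = T.matrix_solve(rhs)(matrix)
```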
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_solve_ls(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_solve_ls(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_solve_ls`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_solve_ls(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_solve_ls(*args, **kwargs)(x1)`
tensorflow.matrix_solve_ls
Solves one or more linear least-squares problems.
`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions form `M`-by-`N` matrices. `rhs` is a tensor of shape `[..., M, K]` whose inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K` matrices that solve the equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares sense.
Below we will use the following notation for each pair of matrix and right-hand sides in the batch:
`matrix` = \(A \in \Re^{m \times n}\),
`rhs` = \(B \in \Re^{m \times k}\),
`output` = \(X \in \Re^{n \times k}\),
`l2_regularizer` = \(\lambda\).
If `fast` is `True`, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then `output` is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.
If `fast` is `False`, an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If `fast` is `False` then `l2_regularizer` is ignored.
Args:
matrix: `Tensor` of shape `[..., M, N]`.
rhs: `Tensor` of shape `[..., M, K]`.
l2_regularizer: 0-D `double` `Tensor`. Ignored if `fast=False`.
fast: bool. Defaults to `True`.
name: string, optional name of the operation.
Returns:
output: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K` matrices that solve the equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares sense.
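A sketch under the assumed `T` instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

matrix = tf.constant([[1., 1.], [1., 2.], [1., 3.]])  # M=3, N=2
rhs = tf.constant([[6.], [9.], [12.]])                # M=3, K=1

# Equivalent to tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.1)
x = T.matrix_solve_ls(rhs, l2_regularizer=0.1)(matrix)
```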
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_transpose(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_transpose(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_transpose`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_transpose(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_transpose(*args, **kwargs)(x1)`
tensorflow.matrix_transpose
Transposes last two dimensions of tensor `a`.
For example:
```python
# Matrix with no batch dimension.
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.matrix_transpose(x) ==> [[1 4]
                            [2 5]
                            [3 6]]

# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
```
Args:
a: A `Tensor` with `rank >= 2`.
name: A name for the operation (optional).
Returns:
A transposed batch matrix `Tensor`.
Raises:
ValueError: If `a` is determined statically to have `rank < 2`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def matrix_triangular_solve(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.matrix_triangular_solve(*args, **kwargs)
It accepts the same arguments as `tensorflow.matrix_triangular_solve`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tensorflow.matrix_triangular_solve(x1, *args, **kwargs)`
is equivalent to
`builder.matrix_triangular_solve(*args, **kwargs)(x1)`
tensorflow.matrix_triangular_solve
Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.
`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. If `lower` is `True` then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If `lower` is `False` then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. `rhs` is a tensor of shape `[..., M, K]`.
The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then the innermost matrices in `output` satisfy the matrix equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. If `adjoint` is `True` then the innermost matrices in `output` satisfy the matrix equations `adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.
Args:
matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
rhs: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, K]`.
lower: An optional `bool`. Defaults to `True`. Boolean indicating whether the innermost matrices in `matrix` are lower or upper triangular.
adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool(*args, **kwargs)
It accepts the same arguments as `tf.nn.max_pool`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.nn.max_pool(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool(*args, **kwargs)(x1)`
tf.nn.max_pool
Performs the max pooling on the input.
Args:
value: A 4-D `Tensor` with shape `[batch, height, width, channels]` and type `tf.float32`.
ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
data_format: A string. 'NHWC' and 'NCHW' are supported.
name: Optional name for the operation.
Returns:
A `Tensor` with type `tf.float32`. The max pooled output tensor.
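A sketch under the assumed `T` instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

x = tf.placeholder(tf.float32, [None, 28, 28, 3])

# Equivalent to
# tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
pooled = T.max_pool(ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                    padding='SAME')(x)
```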
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool2d(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.max_pool2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.max_pool2d(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool2d(*args, **kwargs)(x1)`
tf.contrib.layers.max_pool2d
Adds a 2D Max Pooling op.
It is assumed that the pooling is done per image but not in batch or channels.
Args:
inputs: A `Tensor` of size `[batch_size, height, width, channels]`.
kernel_size: A list of length 2: `[kernel_height, kernel_width]` of the pooling kernel over which the op is computed. Can be an int if both values are the same.
stride: A list of length 2: `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: The padding method, either 'VALID' or 'SAME'.
outputs_collections: The collections to which the outputs are added.
scope: Optional scope for name_scope.
Returns:
A `Tensor` representing the results of the pooling operation.
Raises: ValueError: If 'kernel_size' is not a 2-D list.
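A sketch under the assumed `T` instance:
```python
import tensorflow as tf
from tensorbuilder import T  # assumption: a TensorBuilder instance named T

x = tf.placeholder(tf.float32, [None, 28, 28, 3])

# Equivalent to tf.contrib.layers.max_pool2d(x, [2, 2], stride=2)
pooled = T.max_pool2d([2, 2], stride=2)(x)
```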
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool3d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool3d(*args, **kwargs)
It accepts the same arguments as `tf.nn.max_pool3d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.nn.max_pool3d(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool3d(*args, **kwargs)(x1)`
tf.nn.max_pool3d
Performs 3D max pooling on the input.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize: A list of `ints` that has `length >= 5`. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: A list of `ints` that has `length >= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. The max pooled output tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool3d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool3d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool3d_conv2d_layer(*args, **kwargs)(x1)` and the keyword argument `activation_fn` is set to `tf.nn.max_pool3d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool3d_grad(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool3d_grad(*args, **kwargs)
It accepts the same arguments as `tf.nn.max_pool3d_grad`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.nn.max_pool3d_grad(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool3d_grad(*args, **kwargs)(x1)`
tf.nn.max_pool3d_grad
Computes gradients of the max pooling function.
Args:
orig_input: A `Tensor` of type `float32`. The original input tensor.
orig_output: A `Tensor` of type `float32`. The original output tensor.
grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: A list of `ints` that has `length >= 5`. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: A list of `ints` that has `length >= 5`. 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `grad`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool3d_grad_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool3d_grad_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool3d_grad_conv2d_layer(*args, **kwargs)(x1)` and the keyword argument `activation_fn` is set to `tf.nn.max_pool3d_grad`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool3d_grad_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool3d_grad_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool3d_grad_layer(*args, **kwargs)(x1)` and the keyword argument `activation_fn` is set to `tf.nn.max_pool3d_grad`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, e.g. `[batch_size, depth]` or `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
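Many of the generated `*_layer` methods wrap `tf.contrib.layers.fully_connected`, so a minimal sketch of the wrapped function may help; the shapes are illustrative assumptions:

```python
import tensorflow as tf

# Hypothetical rank-2 input: [batch_size, depth].
x = tf.placeholder(tf.float32, [None, 784])

# Creates weights of shape [784, 256]; biases are created and added
# because no normalizer_fn is given, then ReLU is applied.
hidden = tf.contrib.layers.fully_connected(
    x, num_outputs=256, activation_fn=tf.nn.relu)
```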
def max_pool3d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool3d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool3d_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.max_pool3d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool_conv2d_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.max_pool`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.max_pool`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool_with_argmax(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool_with_argmax(*args, **kwargs)
It accepts the same arguments as `tf.nn.max_pool_with_argmax`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.nn.max_pool_with_argmax(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool_with_argmax(*args, **kwargs)(x1)`
tf.nn.max_pool_with_argmax
Performs max pooling on the input and outputs both max values and indices.
The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index `((b * height + y) * width + x) * channels + c`.
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `half`. 4-D with shape `[batch, height, width, channels]`. Input to pool over.
ksize: A list of `ints` that has length `>= 4`. The size of the window for each dimension of the input tensor.
strides: A list of `ints` that has length `>= 4`. The stride of the sliding window for each dimension of the input tensor.
padding: A `string` from: `"SAME", "VALID"`. The type of padding algorithm to use.
Targmax: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects `(output, argmax)`.
output: A `Tensor`. Has the same type as `input`. The max pooled output tensor.
argmax: A `Tensor` of type `Targmax`. 4-D. The flattened indices of the max values chosen for each output.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
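The flattening rule for `argmax` can be read off a small example; a sketch with arbitrary illustrative shapes (note that in this generation of TensorFlow the op is only implemented for GPU):

```python
import tensorflow as tf

# One 4x4 single-channel image, pooled with a 2x2 window and stride 2.
x = tf.placeholder(tf.float32, [1, 4, 4, 1])
output, argmax = tf.nn.max_pool_with_argmax(
    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# With batch = 1, height = width = 4 and channels = 1, a max value at
# [0, y, x, 0] appears in argmax as the flattened index y * 4 + x.
```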
def max_pool_with_argmax_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool_with_argmax_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool_with_argmax_conv2d_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.max_pool_with_argmax`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def max_pool_with_argmax_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.max_pool_with_argmax_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.max_pool_with_argmax_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.max_pool_with_argmax`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def maximize(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.maximize(*args, **kwargs)
It accepts the same arguments as `tb.maximize`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tb.maximize(x1, *args, **kwargs)`
is equivalent to
`builder.maximize(*args, **kwargs)(x1)`
tb.maximize
None
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def maximum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.maximum(*args, **kwargs)
It accepts the same arguments as `tensorflow.maximum`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.maximum(x1, *args, **kwargs)`
is equivalent to
`builder.maximum(*args, **kwargs)(x1)`
tensorflow.maximum
Returns the max of x and y (i.e. x > y ? x : y) element-wise.
NOTE: `Maximum` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
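Because the generated method returns a partial, `maximum` drops straight into a pipeline. A minimal sketch; the `tb` alias and the `Pipe` call are assumptions based on the phi `Builder` this class inherits from:

```python
import tensorflow as tf
import tensorbuilder as tb

x = tf.placeholder(tf.float32, [None, 10])

# tb.maximum(0.0) is a partial expecting the tensor, so this computes
# tf.maximum(x, 0.0) element-wise, i.e. a ReLU-style clamp at zero.
y = tb.Pipe(x, tb.maximum(0.0))
```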
def merge_all_summaries(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.merge_all_summaries(*args, **kwargs)
It accepts the same arguments as `tensorflow.merge_all_summaries`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.merge_all_summaries(x1, *args, **kwargs)`
is equivalent to
`builder.merge_all_summaries(*args, **kwargs)(x1)`
tensorflow.merge_all_summaries
Merges all summaries collected in the default graph.
Args:
key: `GraphKey` used to collect the summaries. Defaults to `GraphKeys.SUMMARIES`.
Returns:
If no summaries were collected, returns None. Otherwise returns a scalar `Tensor` of type `string` containing the serialized `Summary` protocol buffer resulting from the merging.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def merge_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.merge_summary(*args, **kwargs)
It accepts the same arguments as `tensorflow.merge_summary`. However, the 2nd argument is omitted; a partial with the rest of the arguments is returned, which expects the 2nd argument, such that
`tensorflow.merge_summary(x1, x2, *args, **kwargs)`
is equivalent to
`builder.merge_summary(x1, *args, **kwargs)(x2)`
tensorflow.merge_summary
Merges summaries.
This op creates a `Summary` protocol buffer that contains the union of all the values in the input summaries.
When the Op is run, it reports an `InvalidArgument` error if multiple values in the summaries to merge use the same tag.
Args:
inputs: A list of `string` `Tensor` objects containing serialized `Summary` protocol buffers.
collections: Optional list of graph collection keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
name: A name for the operation (optional).
Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer resulting from the merging.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
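A minimal sketch of the wrapped op itself, using this generation's `tf.scalar_summary`; the tags and values are illustrative:

```python
import tensorflow as tf

loss = tf.constant(0.3)
accuracy = tf.constant(0.9)

loss_summ = tf.scalar_summary("loss", loss)
acc_summ = tf.scalar_summary("accuracy", accuracy)

# One serialized Summary protocol buffer holding the union of the
# values; running it fails if two inputs reuse the same tag.
merged = tf.merge_summary([loss_summ, acc_summ])
```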
def meshgrid(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.meshgrid(*args, **kwargs)
It accepts the same arguments as `tensorflow.meshgrid`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.meshgrid(x1, *args, **kwargs)`
is equivalent to
`builder.meshgrid(*args, **kwargs)(x1)`
tensorflow.meshgrid
Broadcasts parameters for evaluation on an N-D grid.
Given N one-dimensional coordinate arrays `*args`, returns a list `outputs` of N-D coordinate arrays for evaluating expressions on an N-D grid.
Notes:
`meshgrid` supports cartesian ('xy') and matrix ('ij') indexing conventions. When the `indexing` argument is set to 'xy' (the default), the broadcasting instructions for the first two dimensions are swapped.
Examples:
Calling `X, Y = meshgrid(x, y)` with the tensors

    x = [1, 2, 3]
    y = [4, 5, 6]

results in

    X = [[1, 1, 1],
         [2, 2, 2],
         [3, 3, 3]]
    Y = [[4, 5, 6],
         [4, 5, 6],
         [4, 5, 6]]

Args:
*args: `Tensor`s with rank 1.
indexing: Either 'xy' or 'ij' (optional, default: 'xy').
name: A name for the operation (optional).
Returns:
outputs: A list of N `Tensor`s with rank N.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def min_max_variable_partitioner(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.min_max_variable_partitioner(*args, **kwargs)
It accepts the same arguments as `tensorflow.min_max_variable_partitioner`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.min_max_variable_partitioner(x1, *args, **kwargs)`
is equivalent to
`builder.min_max_variable_partitioner(*args, **kwargs)(x1)`
tensorflow.min_max_variable_partitioner
Partitioner to allocate minimum size per slice.
Returns a partitioner that partitions the variable of given shape and dtype such that each partition has a minimum of `min_slice_size` slice of the variable. The maximum number of such partitions (upper bound) is given by `max_partitions`.
Args:
max_partitions: Upper bound on the number of partitions. Defaults to 1.
axis: Axis along which to partition the variable. Defaults to 0.
min_slice_size: Minimum size of the variable slice per partition. Defaults to 256K.
bytes_per_string_element: If the `Variable` is of type string, this provides an estimate of how large each scalar in the `Variable` is.
Returns:
A partition function usable as the `partitioner` argument to `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
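A sketch of passing the returned partitioner to `tf.get_variable`; the variable shape and limits are illustrative assumptions:

```python
import tensorflow as tf

# At most 4 partitions, each covering at least 256K bytes of the variable.
partitioner = tf.min_max_variable_partitioner(
    max_partitions=4, min_slice_size=256 << 10)

with tf.variable_scope("embeddings", partitioner=partitioner):
    embedding = tf.get_variable(
        "weights", shape=[100000, 128], dtype=tf.float32)
```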
def minimize(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.minimize(*args, **kwargs)
It accepts the same arguments as `tb.minimize`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tb.minimize(x1, *args, **kwargs)`
is equivalent to
`builder.minimize(*args, **kwargs)(x1)`
tb.minimize
None
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def minimum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.minimum(*args, **kwargs)
It accepts the same arguments as `tensorflow.minimum`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.minimum(x1, *args, **kwargs)`
is equivalent to
`builder.minimum(*args, **kwargs)(x1)`
tensorflow.minimum
Returns the min of x and y (i.e. x < y ? x : y) element-wise.
NOTE: `Minimum` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def mod(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.mod(*args, **kwargs)
It accepts the same arguments as `tensorflow.mod`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.mod(x1, *args, **kwargs)`
is equivalent to
`builder.mod(*args, **kwargs)(x1)`
tensorflow.mod
Returns element-wise remainder of division.
NOTE: `Mod` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def model_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.model_variables(*args, **kwargs)
It accepts the same arguments as `tensorflow.model_variables`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.model_variables(x1, *args, **kwargs)`
is equivalent to
`builder.model_variables(*args, **kwargs)(x1)`
tensorflow.model_variables
Returns all variables in the MODEL_VARIABLES collection.
Returns: A list of Variable objects.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def moments(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.moments(*args, **kwargs)
It accepts the same arguments as `tf.nn.moments`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.nn.moments(x1, *args, **kwargs)`
is equivalent to
`builder.moments(*args, **kwargs)(x1)`
tf.nn.moments
Calculate the mean and variance of `x`.
The mean and variance are calculated by aggregating the contents of `x` across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean and variance of a vector.
When using these moments for batch normalization (see `tf.nn.batch_normalization`):
* for so-called "global normalization", used with convolutional filters with shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.
* for simple batch normalization pass `axes=[0]` (batch only).
Args:
x: A `Tensor`.
axes: array of ints. Axes along which to compute mean and variance.
shift: A `Tensor` containing the value by which to shift the data for numerical stability, or `None` if no shift is to be performed. A shift close to the true mean provides the most numerically stable results.
name: Name used to scope the operations that compute the moments.
keep_dims: produce moments with the same dimensionality as the input.
Returns:
Two `Tensor` objects: `mean` and `variance`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
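The two axes conventions above translate directly into code; a sketch of the "global normalization" case with an illustrative activation shape:

```python
import tensorflow as tf

# Hypothetical conv activations: [batch, height, width, depth].
x = tf.placeholder(tf.float32, [None, 28, 28, 64])

# One mean/variance per depth channel, aggregated over batch and space.
mean, variance = tf.nn.moments(x, axes=[0, 1, 2])

normalized = tf.nn.batch_normalization(
    x, mean, variance, offset=None, scale=None, variance_epsilon=1e-5)
```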
def moments_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.moments_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.moments_conv2d_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.moments`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def moments_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.moments_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.moments_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.moments`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def moving_average_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.moving_average_variables(*args, **kwargs)
It accepts the same arguments as `tensorflow.moving_average_variables`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.moving_average_variables(x1, *args, **kwargs)`
is equivalent to
`builder.moving_average_variables(*args, **kwargs)(x1)`
tensorflow.moving_average_variables
Returns all variables that maintain their moving averages.
If an `ExponentialMovingAverage` object is created and the `apply()` method is called on a list of variables, these variables will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. This convenience function returns the contents of that collection.
Returns: A list of Variable objects.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
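A sketch of how variables end up in that collection; the variable and decay value are illustrative:

```python
import tensorflow as tf

w = tf.Variable(tf.zeros([10]), name="w")
ema = tf.train.ExponentialMovingAverage(decay=0.999)

# apply() creates shadow averages for the listed variables and adds
# the originals to GraphKeys.MOVING_AVERAGE_VARIABLES.
maintain_averages_op = ema.apply([w])

assert w in tf.moving_average_variables()
```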
def mul(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.mul(*args, **kwargs)
It accepts the same arguments as `tensorflow.mul`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.mul(x1, *args, **kwargs)`
is equivalent to
`builder.mul(*args, **kwargs)(x1)`
tensorflow.mul
Returns x * y element-wise.
NOTE: `Mul` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def multinomial(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.multinomial(*args, **kwargs)
It accepts the same arguments as `tensorflow.multinomial`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.multinomial(x1, *args, **kwargs)`
is equivalent to
`builder.multinomial(*args, **kwargs)(x1)`
tensorflow.multinomial
Draws samples from a multinomial distribution.
Example:
```python
# samples has shape [1, 5], where each value is either 0 or 1 with
# equal probability.
samples = tf.multinomial(tf.log([[10., 10.]]), 5)
```
Args:
logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents the unnormalized log probabilities for all classes.
num_samples: 0-D. Number of independent samples to draw for each row slice.
seed: A Python integer. Used to create a random seed for the distribution. See `set_random_seed` for behavior.
name: Optional name for the operation.
Returns:
The drawn samples of shape `[batch_size, num_samples]`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
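Expanding on the docstring's example, a sketch that makes the output shape and dtype explicit; the logits are illustrative:

```python
import tensorflow as tf

# Unnormalized log-probabilities for 2 examples over 3 classes.
logits = tf.log([[10., 10., 10.], [0., 1., 2.]])

# 4 independent draws per row: shape [2, 4], dtype int64, each entry
# a class index in [0, 3).
samples = tf.multinomial(logits, num_samples=4)
```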
def name_scope(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.name_scope(*args, **kwargs)
It accepts the same arguments as `tensorflow.name_scope`. However, a partial with the arguments is returned which expects any argument `x` and completely ignores it, such that
`tensorflow.name_scope(*args, **kwargs)`
is equivalent to
`builder.name_scope(*args, **kwargs)(x)`
tensorflow.name_scope
Returns a context manager for use when defining a Python op.
This context manager validates that the given `values` are from the same graph, makes that graph the default graph, and pushes a name scope in that graph (see `Graph.name_scope()` for more details on that).
For example, to define a new Python op called `my_op`:

    def my_op(a, b, c, name=None):
        with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
            a = tf.convert_to_tensor(a, name="a")
            b = tf.convert_to_tensor(b, name="b")
            c = tf.convert_to_tensor(c, name="c")
            # Define some computation that uses `a`, `b`, and `c`.
            return foo_op(..., name=scope)

Args:
name: The name argument that is passed to the op function.
default_name: The default name to use if the `name` argument is `None`.
values: The list of `Tensor` arguments that are passed to the op function.
Returns: A context manager for use in defining Python ops. Yields the name scope.
Raises:
ValueError: if neither `name` nor `default_name` is provided but `values` are.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then0(fn, *args, **kwargs)
def nce_loss(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.nce_loss(*args, **kwargs)
It accepts the same arguments as `tf.nn.nce_loss`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.nn.nce_loss(x1, *args, **kwargs)`
is equivalent to
`builder.nce_loss(*args, **kwargs)(x1)`
tf.nn.nce_loss
Computes and returns the noise-contrastive estimation training loss.
See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). Also see our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf).
Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see `log_uniform_candidate_sampler`.
Note: In the case where `num_true` > 1, we assign to each target class the target probability 1 / `num_true` so that the target probabilities sum to 1 per example.
Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class.
Args:
weights: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape `[num_classes, dim]`. The (possibly-partitioned) class embeddings.
biases: A `Tensor` of shape `[num_classes]`. The class biases.
inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
labels: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
num_sampled: An `int`. The number of classes to randomly sample per batch.
num_classes: An `int`. The number of possible classes.
num_true: An `int`. The number of target classes per training example.
sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. (if None, we default to `log_uniform_candidate_sampler`)
remove_accidental_hits: A `bool`. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to `True`, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf). Default is False.
partition_strategy: A string specifying the partitioning strategy, relevant if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. See `tf.nn.embedding_lookup` for more details.
name: A name for the operation (optional).
Returns:
A `batch_size` 1-D tensor of per-example NCE losses.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
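A word2vec-style sketch in the argument order documented above; every size here is an illustrative assumption:

```python
import tensorflow as tf

vocab_size, dim, batch_size = 10000, 128, 32

weights = tf.Variable(tf.truncated_normal([vocab_size, dim], stddev=0.1))
biases = tf.Variable(tf.zeros([vocab_size]))
inputs = tf.placeholder(tf.float32, [batch_size, dim])
labels = tf.placeholder(tf.int64, [batch_size, 1])  # num_true = 1

# Per-example losses, shape [batch_size]; the default Zipfian sampler
# assumes labels are ordered by decreasing frequency.
losses = tf.nn.nce_loss(weights, biases, inputs, labels,
                        num_sampled=64, num_classes=vocab_size)
loss = tf.reduce_mean(losses)
```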
def nce_loss_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.nce_loss_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.nce_loss_conv2d_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.nce_loss`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def nce_loss_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.nce_loss_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.fully_connected(x1, *args, **kwargs)`
is equivalent to
`builder.nce_loss_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.nce_loss`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def neg(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.neg(*args, **kwargs)
It accepts the same arguments as `tensorflow.neg`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.neg(x1, *args, **kwargs)`
is equivalent to
`builder.neg(*args, **kwargs)(x1)`
tensorflow.neg
Computes numerical negative value element-wise.
I.e., (y = -x).
Args:
x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def no_op(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.no_op(*args, **kwargs)
It accepts the same arguments as `tensorflow.no_op`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.no_op(x1, *args, **kwargs)`
is equivalent to
`builder.no_op(*args, **kwargs)(x1)`
tensorflow.no_op
Does nothing. Only useful as a placeholder for control edges.
Args: name: A name for the operation (optional).
Returns: The created Operation.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
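A sketch of the "placeholder for control edges" use: grouping side effects behind one op. The variable is illustrative:

```python
import tensorflow as tf

v = tf.Variable(0)
increment = tf.assign_add(v, 1)

# train_step produces no output; running it just forces the updates
# listed in the control dependencies to run first.
with tf.control_dependencies([increment]):
    train_step = tf.no_op(name="train_step")
```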
def no_regularizer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.no_regularizer(*args, **kwargs)
It accepts the same arguments as `tensorflow.no_regularizer`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tensorflow.no_regularizer(x1, *args, **kwargs)`
is equivalent to
`builder.no_regularizer(*args, **kwargs)(x1)`
tensorflow.no_regularizer
Use this function to prevent regularization of variables.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def normalize_moments(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.normalize_moments(*args, **kwargs)
It accepts the same arguments as `tf.nn.normalize_moments`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.nn.normalize_moments(x1, *args, **kwargs)`
is equivalent to
`builder.normalize_moments(*args, **kwargs)(x1)`
tf.nn.normalize_moments
Calculate the mean and variance based on the sufficient statistics.
Args:
counts: A `Tensor` containing the total count of the data (one value).
mean_ss: A `Tensor` containing the mean sufficient statistics: the (possibly shifted) sum of the elements to average over.
variance_ss: A `Tensor` containing the variance sufficient statistics: the (possibly shifted) squared sum of the data to compute the variance over.
shift: A `Tensor` containing the value by which the data is shifted for numerical stability, or `None` if no shift was performed.
name: Name used to scope the operations that compute the moments.
Returns:
Two `Tensor` objects: `mean` and `variance`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
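`normalize_moments` pairs with `tf.nn.sufficient_statistics`, which produces exactly the `counts`/`mean_ss`/`variance_ss`/`shift` values it consumes; a sketch with an illustrative shape:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 100])

# Count plus (possibly shifted) sum and squared sum over the batch axis.
counts, mean_ss, variance_ss, shift = tf.nn.sufficient_statistics(
    x, axes=[0], shift=None)

# With shift=None: mean = mean_ss / counts,
# variance = variance_ss / counts - mean**2.
mean, variance = tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift)
```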
def normalize_moments_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.normalize_moments_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`. However, the 1st argument is omitted; a partial with the rest of the arguments is returned, which expects the 1st argument, such that
`tf.contrib.layers.convolution2d(x1, *args, **kwargs)`
is equivalent to
`builder.normalize_moments_conv2d_layer(*args, **kwargs)(x1)`, and the keyword argument `activation_fn` is set to `tf.nn.normalize_moments`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided, then a `biases` variable is created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2, `[kernel_height, kernel_width]`, of the filters. Can be an int if both values are the same.
stride: a list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are neither created nor added. Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def normalize_moments_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.normalize_moments_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.normalize_moments_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.normalize_moments`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided, then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not
None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last
dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
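For orientation, a minimal sketch of calling the wrapped function directly, assuming the contrib-era TensorFlow API documented above (the tensor names are illustrative):

```python
import tensorflow as tf

# A 784-dimensional input batch, e.g. flattened MNIST images.
x = tf.placeholder(tf.float32, [None, 784])
# One hidden layer of 128 units; fully_connected creates the `weights`
# (and `biases`) variables described above.
h = tf.contrib.layers.fully_connected(x, num_outputs=128,
                                      activation_fn=tf.nn.relu)
```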
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def not_equal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.not_equal(*args, **kwargs)
It accepts the same arguments as `tensorflow.not_equal`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.not_equal(x1, *args, **kwargs)
is equivalent to
builder.not_equal(*args, **kwargs)(x1)
tensorflow.not_equal
Returns the truth value of (x != y) element-wise.
NOTE: `NotEqual` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
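As a quick illustration of the partial-application pattern described above, here is a minimal sketch. It assumes the TensorFlow version documented here and a `TensorBuilder` instance named `tb`, which is a hypothetical variable name:

```python
import tensorflow as tf
# `tb` is assumed to be a TensorBuilder instance (hypothetical name,
# constructed elsewhere per this library's conventions).

x = tf.constant([1, 2, 3])
# tb.not_equal(2) returns a partial that still expects its 1st argument,
# so tb.not_equal(2)(x) is equivalent to tf.not_equal(x, 2).
mask = tb.not_equal(2)(x)

with tf.Session() as sess:
    print(sess.run(mask))  # [ True False  True]
```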
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def one_hot(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.one_hot(*args, **kwargs)
It accepts the same arguments as `tensorflow.one_hot`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.one_hot(x1, *args, **kwargs)
is equivalent to
builder.one_hot(*args, **kwargs)(x1)
tensorflow.one_hot
Returns a one-hot tensor.
The locations represented by indices in `indices` take value `on_value`,
while all other locations take value `off_value`.
`on_value` and `off_value` must have matching data types. If `dtype` is also
provided, they must be the same data type as specified by `dtype`.
If `on_value` is not provided, it will default to the value 1 with type `dtype`.
If `off_value` is not provided, it will default to the value 0 with type `dtype`.
If the input `indices` is rank `N`, the output will have rank `N+1`. The
new axis is created at dimension `axis` (default: the new axis is appended
at the end).
If `indices` is a scalar, the output shape will be a vector of length `depth`.
If `indices` is a vector of length `features`, the output shape will be:
features x depth if axis == -1
depth x features if axis == 0
If `indices` is a matrix (batch) with shape `[batch, features]`, the output
shape will be:
batch x features x depth if axis == -1
batch x depth x features if axis == 1
depth x batch x features if axis == 0
If `dtype` is not provided, it will attempt to assume the data type of
`on_value` or `off_value`, if one or both are passed in. If none of
`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to
the value `tf.float32`.
Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.),
both `on_value` and `off_value` must be provided to `one_hot`.
Examples
Suppose that
indices = [0, 2, -1, 1]
depth = 3
on_value = 5.0
off_value = 0.0
axis = -1
Then output is `[4 x 3]`:
output =
[5.0 0.0 0.0]  // one_hot(0)
[0.0 0.0 5.0]  // one_hot(2)
[0.0 0.0 0.0]  // one_hot(-1)
[0.0 5.0 0.0]  // one_hot(1)
Suppose that
indices = [[0, 2], [1, -1]]
depth = 3
on_value = 1.0
off_value = 0.0
axis = -1
Then output is `[2 x 2 x 3]`:
output =
[
[1.0, 0.0, 0.0]  // one_hot(0)
[0.0, 0.0, 1.0]  // one_hot(2)
][
[0.0, 1.0, 0.0]  // one_hot(1)
[0.0, 0.0, 0.0]  // one_hot(-1)
]
Using default values for `on_value` and `off_value`:
indices = [0, 1, 2]
depth = 3
The output will be
output =
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]
Args:
indices: A `Tensor` of indices.
depth: A scalar defining the depth of the one-hot dimension.
on_value: A scalar defining the value to fill in output when `indices[j] = i`. (default: 1)
off_value: A scalar defining the value to fill in output when `indices[j] != i`. (default: 0)
axis: The axis to fill (default: -1, a new inner-most axis).
dtype: The data type of the output tensor.
Returns: output: The one-hot tensor.
Raises:
TypeError: If the dtype of either `on_value` or `off_value` doesn't match `dtype`.
TypeError: If the dtypes of `on_value` and `off_value` don't match one another.
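A short sketch reproducing the first example above with the TensorFlow API as documented here:

```python
import tensorflow as tf

indices = tf.constant([0, 2, -1, 1])
# Matches the [4 x 3] example above: 5.0 at the index position, 0.0 elsewhere,
# and an all-off row for the out-of-range index -1.
onehot = tf.one_hot(indices, depth=3, on_value=5.0, off_value=0.0, axis=-1)

with tf.Session() as sess:
    print(sess.run(onehot))
```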
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def ones(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ones(*args, **kwargs)
It accepts the same arguments as `tensorflow.ones`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.ones(x1, *args, **kwargs)
is equivalent to
builder.ones(*args, **kwargs)(x1)
tensorflow.ones
Creates a tensor with all elements set to 1.
This operation returns a tensor of type `dtype` with shape `shape` and all
elements set to 1.
For example:
python
tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]
Args:
shape: Either a list of integers, or a 1-D `Tensor` of type `int32`.
dtype: The type of an element in the resulting `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` with all elements set to 1.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def ones_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ones_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.ones_initializer`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.ones_initializer(x1, *args, **kwargs)
is equivalent to
builder.ones_initializer(*args, **kwargs)(x1)
tensorflow.ones_initializer
An adaptor for ones() to match the Initializer spec.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def ones_like(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.ones_like(*args, **kwargs)
It accepts the same arguments as `tensorflow.ones_like`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.ones_like(x1, *args, **kwargs)
is equivalent to
builder.ones_like(*args, **kwargs)(x1)
tensorflow.ones_like
Creates a tensor with all elements set to 1.
Given a single tensor (`tensor`), this operation returns a tensor of the same
type and shape as `tensor` with all elements set to 1. Optionally, you can
specify a new type (`dtype`) for the returned tensor.
For example:
```python
# 'tensor' is [[1, 2, 3], [4, 5, 6]]
tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
```
Args:
tensor: A `Tensor`.
dtype: A type for the returned `Tensor`. Must be `float32`, `float64`,
`int8`, `int16`, `int32`, `int64`, `uint8`, `complex64`, `complex128`, or
`bool`.
name: A name for the operation (optional).
optimize: if true, attempt to statically determine the shape of 'tensor'
and encode it as a constant.
Returns:
A `Tensor` with all elements set to 1.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def op_scope(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.op_scope(*args, **kwargs)
It accepts the same arguments as `tensorflow.op_scope`.
However, a partial with the arguments is returned which expects any argument `x`
and completely ignores it, such that
tensorflow.op_scope(*args, **kwargs)
is equivalent to
builder.op_scope(*args, **kwargs)(x)
tensorflow.op_scope
DEPRECATED. Same as name_scope above, just different argument order.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then0(fn, *args, **kwargs)
def pack(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.pack(*args, **kwargs)
It accepts the same arguments as `tensorflow.pack`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.pack(x1, *args, **kwargs)
is equivalent to
builder.pack(*args, **kwargs)(x1)
tensorflow.pack
Packs a list of rank-`R` tensors into one rank-`(R+1)` tensor.
Packs the list of tensors in `values` into a tensor with rank one higher than
each tensor in `values`, by packing them along the `axis` dimension.
Given a list of length `N` of tensors of shape `(A, B, C)`:
if `axis == 0` then the output tensor will have the shape `(N, A, B, C)`.
if `axis == 1` then the output tensor will have the shape `(A, N, B, C)`.
Etc.
For example:
```prettyprint
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
```
This is the opposite of unpack. The numpy equivalent is
tf.pack([x, y, z]) = np.asarray([x, y, z])
Args:
values: A list of `Tensor` objects with the same shape and type.
axis: An `int`. The axis to pack along. Defaults to the first dimension.
Supports negative indexes.
name: A name for this operation (optional).
Returns:
output: A packed `Tensor` with the same type as `values`.
Raises:
ValueError: If `axis` is out of the range [-(R+1), R+1).
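A minimal sketch of the axis semantics described above, using the pre-1.0 `tf.pack` as documented here (it was later renamed `tf.stack`):

```python
import tensorflow as tf

x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])

packed = tf.pack([x, y, z])            # shape (3, 2), like np.asarray([x, y, z])
packed_t = tf.pack([x, y, z], axis=1)  # shape (2, 3)
```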
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def pad(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.pad(*args, **kwargs)
It accepts the same arguments as `tensorflow.pad`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.pad(x1, *args, **kwargs)
is equivalent to
builder.pad(*args, **kwargs)(x1)
tensorflow.pad
Pads a tensor.
This operation pads a `tensor` according to the `paddings` you specify.
`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of
`tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how
many values to add before the contents of `tensor` in that dimension, and
`paddings[D, 1]` indicates how many values to add after the contents of
`tensor` in that dimension. If `mode` is "REFLECT" then both `paddings[D, 0]`
and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If
`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be
no greater than `tensor.dim_size(D)`.
The padded size of each dimension D of the output is:
paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]
For example:
```python
# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1], [2, 2]].
# rank of 't' is 2.
pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0],
                                  [0, 0, 1, 2, 3, 0, 0],
                                  [0, 0, 4, 5, 6, 0, 0],
                                  [0, 0, 0, 0, 0, 0, 0]]
pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4],
                                 [3, 2, 1, 2, 3, 2, 1],
                                 [6, 5, 4, 5, 6, 5, 4],
                                 [3, 2, 1, 2, 3, 2, 1]]
pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2],
                                   [2, 1, 1, 2, 3, 3, 2],
                                   [5, 4, 4, 5, 6, 6, 5],
                                   [5, 4, 4, 5, 6, 6, 5]]
```
Args:
tensor: A `Tensor`.
paddings: A `Tensor` of type `int32`.
mode: One of "CONSTANT", "REFLECT", or "SYMMETRIC".
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `tensor`.
Raises: ValueError: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC".
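A small sketch checking the padded-size formula above against the example (assuming the TF API documented here):

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)
paddings = tf.constant([[1, 1], [2, 2]])
out = tf.pad(t, paddings, "CONSTANT")
# Padded size per dimension: paddings[D, 0] + dim_size(D) + paddings[D, 1]
# -> (1 + 2 + 1, 2 + 3 + 2) = (4, 7)
print(out.get_shape())  # (4, 7)
```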
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def parse_example(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.parse_example(*args, **kwargs)
It accepts the same arguments as `tensorflow.parse_example`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.parse_example(x1, *args, **kwargs)
is equivalent to
builder.parse_example(*args, **kwargs)(x1)
tensorflow.parse_example
Parses `Example` protos into a `dict` of tensors.
Parses a number of serialized [Example](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
protos given in `serialized`.
`example_names` may contain descriptive names for the corresponding serialized
protos. These may be useful for debugging purposes, but they have no effect on
the output. If not `None`, `example_names` must be the same length as `serialized`.
This op parses serialized examples into a dictionary mapping keys to `Tensor`
and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature`
and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a
`SparseTensor`, and each `FixedLenFeature` is mapped to a `Tensor`.
Each `VarLenFeature` maps to a `SparseTensor` of the specified type
representing a ragged matrix. Its indices are `[batch, index]`, where `batch`
is the batch entry the value is from in `serialized`, and `index` is the
value's index in the list of values associated with that feature and example.
Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or
`tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.
`FixedLenFeature` entries with a `default_value` are optional. With no default
value, we will fail if that `Feature` is missing from any example in
`serialized`.
Examples:
For example, if one expects a `tf.float32` sparse feature `ft` and three
serialized `Example`s are provided:
serialized = [
features
{ feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
features
{ feature []},
features
{ feature { key: "ft" value { float_list { value: [3.0] } } }
]
then the output will look like:
{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
values=[1.0, 2.0, 3.0],
shape=(3, 2)) }
Given two `Example` input protos in `serialized`:
[
features {
feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
feature { key: "gps" value { float_list { value: [] } } }
},
features {
feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
feature { key: "dank" value { int64_list { value: [ 42 ] } } }
feature { key: "gps" value { } }
}
]
And arguments
example_names: ["input0", "input1"],
features: {
"kw": VarLenFeature(tf.string),
"dank": VarLenFeature(tf.int64),
"gps": VarLenFeature(tf.float32),
}
Then the output is a dictionary:
python
{
"kw": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
values=["knit", "big", "emmy"]
shape=[2, 2]),
"dank": SparseTensor(
indices=[[1, 0]],
values=[42],
shape=[2, 1]),
"gps": SparseTensor(
indices=[],
values=[],
shape=[2, 0]),
}
For dense results in two serialized `Example`s:
[
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
}
]
We can use arguments:
example_names: ["input0", "input1"],
features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
}
And the expected output is:
python
{
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
}
Args:
serialized: A vector (1-D Tensor) of strings, a batch of binary
serialized `Example` protos.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
name: A name for this operation (optional).
example_names: A vector (1-D Tensor) of strings (optional), the names of
the serialized protos in the batch.
Returns:
A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
Raises: ValueError: if any feature is invalid.
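A hedged sketch of the dense example above; `serialized_batch` is an assumed 1-D string tensor of serialized `Example` protos, not defined here:

```python
import tensorflow as tf

features = {
    "age": tf.FixedLenFeature([], dtype=tf.int64, default_value=-1),
    "gender": tf.FixedLenFeature([], dtype=tf.string),
}
# serialized_batch: a 1-D string Tensor of serialized Example protos (assumed).
parsed = tf.parse_example(serialized_batch, features)
age = parsed["age"]        # dense Tensor; missing values filled with -1
gender = parsed["gender"]  # dense string Tensor
```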
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def parse_single_example(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.parse_single_example(*args, **kwargs)
It accepts the same arguments as `tensorflow.parse_single_example`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.parse_single_example(x1, *args, **kwargs)
is equivalent to
builder.parse_single_example(*args, **kwargs)(x1)
tensorflow.parse_single_example
Parses a single `Example` proto.
Similar to `parse_example`, except:
For dense tensors, the returned `Tensor` is identical to the output of
`parse_example`, except there is no batch dimension; the output shape is the
same as the shape given in `dense_shape`.
For `SparseTensor`s, the first (batch) column of the indices matrix is removed
(the indices matrix is a column vector), the values vector is unchanged, and
the first (`batch_size`) entry of the shape vector is removed (it is now a
single-element vector).
Args:
serialized: A scalar string Tensor, a single serialized Example.
See `_parse_single_example_raw` documentation for more details.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
name: A name for this operation (optional).
example_names: (Optional) A scalar string Tensor, the associated name.
See `_parse_single_example_raw` documentation for more details.
Returns:
A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
Raises: ValueError: if any feature is invalid.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def parse_single_sequence_example(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.parse_single_sequence_example(*args, **kwargs)
It accepts the same arguments as `tensorflow.parse_single_sequence_example`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.parse_single_sequence_example(x1, *args, **kwargs)
is equivalent to
builder.parse_single_sequence_example(*args, **kwargs)(x1)
tensorflow.parse_single_sequence_example
Parses a single `SequenceExample` proto.
Parses a single serialized [SequenceExample](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
proto given in `serialized`.
This op parses a serialized sequence example into a tuple of dictionaries
mapping keys to `Tensor` and `SparseTensor` objects respectively.
The first dictionary contains mappings for keys appearing in
`context_features`, and the second dictionary contains mappings for keys
appearing in `sequence_features`.
At least one of `context_features` and `sequence_features` must be provided
and non-empty.
The `context_features` keys are associated with a `SequenceExample` as a
whole, independent of time / frame. In contrast, the `sequence_features` keys
provide a way to access variable-length data within the `FeatureList` section
of the `SequenceExample` proto. While the shapes of `context_features` values
are fixed with respect to frame, the frame dimension (the first dimension)
of `sequence_features` values may vary between `SequenceExample` protos,
and even between `feature_list` keys within the same `SequenceExample`.
`context_features` contains `VarLenFeature` and `FixedLenFeature` objects.
Each `VarLenFeature` is mapped to a `SparseTensor`, and each `FixedLenFeature`
is mapped to a `Tensor`, of the specified type, shape, and default value.
`sequence_features` contains `VarLenFeature` and `FixedLenSequenceFeature`
objects. Each `VarLenFeature` is mapped to a `SparseTensor`, and each
`FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type.
The shape will be `(T,) + df.shape` for `FixedLenSequenceFeature` `df`, where
`T` is the length of the associated `FeatureList` in the `SequenceExample`.
For instance, `FixedLenSequenceFeature([])` yields a scalar 1-D `Tensor` of
static shape `[None]` and dynamic shape `[T]`, while
`FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D matrix `Tensor`
of static shape `[None, k]` and dynamic shape `[T, k]`.
Each `SparseTensor` corresponding to `sequence_features` represents a ragged
vector. Its indices are `[time, index]`, where `time` is the `FeatureList`
entry and `index` is the value's index in the list of values associated with
that time.
`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`
entries with `allow_missing=True` are optional; otherwise, we will fail if
that `Feature` or `FeatureList` is missing from any example in `serialized`.
`example_name` may contain a descriptive name for the corresponding serialized
proto. This may be useful for debugging purposes, but it has no effect on the
output. If not `None`, `example_name` must be a scalar.
Args:
serialized: A scalar (0-D Tensor) of type string, a single binary
serialized `SequenceExample` proto.
context_features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values. These features are associated with a
`SequenceExample` as a whole.
sequence_features: A `dict` mapping feature keys to
`FixedLenSequenceFeature` or `VarLenFeature` values. These features are
associated with data within the `FeatureList` section of the
`SequenceExample` proto.
example_name: A scalar (0-D Tensor) of strings (optional), the name of
the serialized proto.
name: A name for this operation (optional).
Returns:
A tuple of two `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s.
The first dict contains the context key/values.
The second dict contains the feature_list key/values.
Raises: ValueError: if any feature is invalid.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def parse_tensor(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.parse_tensor(*args, **kwargs)
It accepts the same arguments as `tensorflow.parse_tensor`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.parse_tensor(x1, *args, **kwargs)
is equivalent to
builder.parse_tensor(*args, **kwargs)(x1)
tensorflow.parse_tensor
Transforms a serialized tensorflow.TensorProto proto into a Tensor.
Args:
serialized: A `Tensor` of type `string`.
A scalar string containing a serialized TensorProto proto.
out_type: A `tf.DType`.
The type of the serialized tensor. The provided type must match the
type of the serialized tensor, and no implicit conversion will take place.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `out_type`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def placeholder(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.placeholder(*args, **kwargs)
It accepts the same arguments as `tensorflow.placeholder`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.placeholder(x1, *args, **kwargs)
is equivalent to
builder.placeholder(*args, **kwargs)(x1)
tensorflow.placeholder
Inserts a placeholder for a tensor that will be always fed.
Important: This tensor will produce an error if evaluated. Its value must
be fed using the `feed_dict` optional argument to `Session.run()`,
`Tensor.eval()`, or `Operation.run()`.
For example:
```python
x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)

with tf.Session() as sess:
  print(sess.run(y))  # ERROR: will fail because x was not fed.

  rand_array = np.random.rand(1024, 1024)
  print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
```
Args:
dtype: The type of elements in the tensor to be fed.
shape: The shape of the tensor to be fed (optional). If the shape is not
specified, you can feed a tensor of any shape.
name: A name for the operation (optional).
Returns:
A `Tensor` that may be used as a handle for feeding a value, but not
evaluated directly.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def placeholder_with_default(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.placeholder_with_default(*args, **kwargs)
It accepts the same arguments as `tensorflow.placeholder_with_default`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.placeholder_with_default(x1, *args, **kwargs)
is equivalent to
builder.placeholder_with_default(*args, **kwargs)(x1)
tensorflow.placeholder_with_default
A placeholder op that passes through `input` when its output is not fed.
Args:
input: A `Tensor`. The default value to produce when `output` is not fed.
shape: A `tf.TensorShape` or list of `int`s.
The (possibly partial) shape of the tensor.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
A placeholder tensor that defaults to `input` if it is not fed.
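A minimal sketch of the pass-through behavior, assuming the Session-style API used throughout this document:

```python
import tensorflow as tf

default = tf.constant([[1.0, 2.0]])
x = tf.placeholder_with_default(default, shape=[None, 2])
y = x * 10.0

with tf.Session() as sess:
    print(sess.run(y))                               # uses the default: [[10. 20.]]
    print(sess.run(y, feed_dict={x: [[3.0, 4.0]]}))  # fed value wins: [[30. 40.]]
```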
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def polygamma(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.polygamma(*args, **kwargs)
It accepts the same arguments as `tensorflow.polygamma`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.polygamma(x1, *args, **kwargs)
is equivalent to
builder.polygamma(*args, **kwargs)(x1)
tensorflow.polygamma
Compute the polygamma function \\(\psi^{(n)}(x)\\).
The polygamma function is defined as
\\(\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)\\),
where \\(\psi(x)\\) is the digamma function.
Args:
a: A `Tensor`. Must be one of the following types: `float32`, `float64`.
x: A `Tensor`. Must have the same type as `a`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `a`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def polynomial_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.polynomial_layer(*args, **kwargs)
It accepts the same arguments as `tb.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tb.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.polynomial_layer(*args, **kwargs)(x1)
However, it uses an activation function of the form
y(i) = z(i)^(i+1)
where z = w*x + b (see the sketch below).
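The library's actual implementation is not shown here; the following is a hypothetical sketch of an element-wise activation matching the formula above, where unit `i` of the linear pre-activation `z = w*x + b` is raised to the power `i + 1`:

```python
import tensorflow as tf

def polynomial_activation(z):
    # z: a [batch_size, num_outputs] pre-activation Tensor.
    num_outputs = int(z.get_shape()[-1])
    # Exponents 1, 2, ..., num_outputs, broadcast across the batch dimension.
    exponents = tf.constant([float(i + 1) for i in range(num_outputs)])
    return tf.pow(z, exponents)
```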
tb.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided, then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not
None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last
dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def pow(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.pow(*args, **kwargs)
It accepts the same arguments as `tensorflow.pow`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.pow(x1, *args, **kwargs)
is equivalent to
builder.pow(*args, **kwargs)(x1)
tensorflow.pow
Computes the power of one value to another.
Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
corresponding elements in `x` and `y`. For example:
```
# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
```
Args:
x: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
or `complex128`.
y: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
or `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def py_func(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.py_func(*args, **kwargs)
It accepts the same arguments as `tensorflow.py_func`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.py_func(x1, *args, **kwargs)
is equivalent to
builder.py_func(*args, **kwargs)(x1)
tensorflow.py_func
Wraps a python function and uses it as a tensorflow op.
Given a python function `func`, which takes numpy arrays as its
inputs and returns numpy arrays as its outputs. E.g.,
python
def my_func(x):
  # x will be a numpy array with the contents of the placeholder below
  return np.sinh(x)
inp = tf.placeholder(tf.float32, [...])
y = py_func(my_func, [inp], [tf.float32])
The above snippet constructs a tf graph which invokes a numpy sinh(x) as an op
in the graph.
Args:
func: A python function.
inp: A list of `Tensor`.
Tout: A list or tuple of tensorflow data types, or a single tensorflow data
type if there is only one, indicating what `func` returns.
stateful: A boolean indicating whether the function should be considered
stateful or stateless, i.e. whether, given the same input, it will
return the same output and at the same time not change state
in an observable way. Optimizations such as common subexpression
elimination are only possible when operations are stateless.
name: A name for the operation (optional).
Returns:
A list of `Tensor` or a single `Tensor` which `func` computes.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_crop(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_crop(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_crop`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_crop(x1, *args, **kwargs)
is equivalent to
builder.random_crop(*args, **kwargs)(x1)
tensorflow.random_crop
Randomly crops a tensor to a given size.
Slices a shape `size` portion out of `value` at a uniformly chosen offset.
Requires `value.shape >= size`.
If a dimension should not be cropped, pass the full size of that dimension.
For example, RGB images can be cropped with
size = [crop_height, crop_width, 3].
Args:
value: Input tensor to crop.
size: 1-D tensor with size the rank of `value`.
seed: Python integer. Used to create a random seed. See
`set_random_seed` for behavior.
name: A name for this operation (optional).
Returns:
A cropped tensor of the same rank as `value` and shape `size`.
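A short sketch of the note above about leaving a dimension uncropped by passing its full size (tensor names are illustrative):

```python
import tensorflow as tf

image = tf.placeholder(tf.float32, shape=(480, 640, 3))
# Crop height and width, keep all 3 channels.
patch = tf.random_crop(image, size=[224, 224, 3], seed=42)
```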
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_gamma(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_gamma(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_gamma`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_gamma(x1, *args, **kwargs)
is equivalent to
builder.random_gamma(*args, **kwargs)(x1)
tensorflow.random_gamma
Draws `shape` samples from each of the given Gamma distribution(s).
`alpha` is the shape parameter describing the distribution(s), and `beta` is
the inverse scale parameter(s).
Example:
samples = tf.random_gamma([10], [0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution

samples = tf.random_gamma([7, 5], [0.5, 1.5])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions

samples = tf.random_gamma([30], [[1.],[3.],[5.]], beta=[[3., 4.]])
# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.
Note that for small alpha values, there is a chance you will draw a value of
exactly 0, which gets worse for lower-precision dtypes, even though zero is
not in the support of the gamma distribution.
Relevant cdfs (~chance you will draw an exactly-0 value):
stats.gamma(.01).cdf(np.finfo(np.float16).tiny) ==> 0.91269738769897879
stats.gamma(.01).cdf(np.finfo(np.float32).tiny) ==> 0.41992668622045726
stats.gamma(.01).cdf(np.finfo(np.float64).tiny) ==> 0.00084322740680686662
stats.gamma(.35).cdf(np.finfo(np.float16).tiny) ==> 0.037583276135263931
stats.gamma(.35).cdf(np.finfo(np.float32).tiny) ==> 5.9514895726818067e-14
stats.gamma(.35).cdf(np.finfo(np.float64).tiny) ==> 2.3529843400647272e-108
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output samples
to be drawn per alpha/beta-parameterized distribution.
alpha: A Tensor or Python value or N-D array of type `dtype`. `alpha`
provides the shape parameter(s) describing the gamma distribution(s) to
sample. Must be broadcastable with `beta`.
beta: A Tensor or Python value or N-D array of type `dtype`. Defaults to 1.
`beta` provides the inverse scale parameter(s) of the gamma
distribution(s) to sample. Must be broadcastable with `alpha`.
dtype: The type of alpha, beta, and the output: `float16`, `float32`, or
`float64`.
seed: A Python integer. Used to create a random seed for the distributions.
See `set_random_seed` for behavior.
name: Optional name for the operation.
Returns:
samples: a `Tensor` of shape `tf.concat(shape, tf.shape(alpha + beta))`
with values of type `dtype`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_normal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_normal(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_normal`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_normal(x1, *args, **kwargs)
is equivalent to
builder.random_normal(*args, **kwargs)(x1)
tensorflow.random_normal
Outputs random values from a normal distribution.
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type `dtype`. The mean of the normal
distribution.
stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation
of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution.
See `set_random_seed` for behavior.
name: A name for the operation (optional).
Returns: A tensor of the specified shape filled with random normal values.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_normal_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_normal_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_normal_initializer`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_normal_initializer(x1, *args, **kwargs)
is equivalent to
builder.random_normal_initializer(*args, **kwargs)(x1)
tensorflow.random_normal_initializer
Returns an initializer that generates tensors with a normal distribution.
Args:
mean: a python scalar or a scalar tensor. Mean of the random values
to generate.
stddev: a python scalar or a scalar tensor. Standard deviation of the
random values to generate.
seed: A Python integer. Used to create random seeds. See
`set_random_seed` for behavior.
dtype: The data type. Only floating point types are supported.
Returns: An initializer that generates tensors with a normal distribution.
Raises:
ValueError: if `dtype` is not a floating point type.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_shuffle(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_shuffle(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_shuffle`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_shuffle(x1, *args, **kwargs)
is equivalent to
builder.random_shuffle(*args, **kwargs)(x1)
tensorflow.random_shuffle
Randomly shuffles a tensor along its first dimension.
The tensor is shuffled along dimension 0, such that each `value[j]` is mapped
to one and only one `output[i]`. For example, a mapping that might occur for a
3x2 tensor is:
python
[[1, 2],       [[5, 6],
 [3, 4],  ==>   [1, 2],
 [5, 6]]        [3, 4]]
Args:
value: A Tensor to be shuffled.
seed: A Python integer. Used to create a random seed for the distribution.
See `set_random_seed` for behavior.
name: A name for the operation (optional).
Returns:
A tensor of same shape and type as `value`, shuffled along its first
dimension.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_uniform(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_uniform(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_uniform`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_uniform(x1, *args, **kwargs)
is equivalent to
builder.random_uniform(*args, **kwargs)(x1)
tensorflow.random_uniform
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range
`[minval, maxval)`. The lower bound `minval` is included in the range, while
the upper bound `maxval` is excluded.
For floats, the default range is `[0, 1)`. For ints, at least `maxval` must
be specified explicitly.
In the integer case, the random integers are slightly biased unless
`maxval - minval` is an exact power of two. The bias is small for values of
`maxval - minval` significantly smaller than the range of the output (either
`2**32` or `2**64`).
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
minval: A 0-D Tensor or Python value of type `dtype`. The lower bound on the
range of random values to generate. Defaults to 0.
maxval: A 0-D Tensor or Python value of type `dtype`. The upper bound on
the range of random values to generate. Defaults to 1 if `dtype` is
floating point.
dtype: The type of the output: `float32`, `float64`, `int32`, or `int64`.
seed: A Python integer. Used to create a random seed for the distribution.
See `set_random_seed` for behavior.
name: A name for the operation (optional).
Returns: A tensor of the specified shape filled with random uniform values.
Raises:
ValueError: If `dtype` is integral and `maxval` is not specified.
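A minimal sketch of the float/int defaults described above (assuming the TF API documented here):

```python
import tensorflow as tf

floats = tf.random_uniform([2, 3])  # float32 in [0, 1) by default
# For integer dtypes, maxval is mandatory; omitting it raises ValueError.
ints = tf.random_uniform([2, 3], maxval=10, dtype=tf.int32)  # in [0, 10)
```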
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def random_uniform_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.random_uniform_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.random_uniform_initializer`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.random_uniform_initializer(x1, *args, **kwargs)
is equivalent to
builder.random_uniform_initializer(*args, **kwargs)(x1)
tensorflow.random_uniform_initializer
Returns an initializer that generates tensors with a uniform distribution.
Args:
minval: A python scalar or a scalar tensor. Lower bound of the range
of random values to generate.
maxval: A python scalar or a scalar tensor. Upper bound of the range
of random values to generate. Defaults to 1 for float types.
seed: A Python integer. Used to create random seeds. See
`set_random_seed` for behavior.
dtype: The data type.
Returns: An initializer that generates tensors with a uniform distribution.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def range(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.range(*args, **kwargs)
It accepts the same arguments as `tensorflow.range`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.range(x1, *args, **kwargs)
is equivalent to
builder.range(*args, **kwargs)(x1)
tensorflow.range
Creates a sequence of integers.
Creates a sequence of integers that begins at `start` and extends by
increments of `delta` up to but not including `limit`.
Like the Python builtin `range`, `start` defaults to 0, so that
`range(n) = range(0, n)`.
For example:
```
# 'start' is 3
# 'limit' is 18
# 'delta' is 3
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]

# 'limit' is 5
tf.range(limit) ==> [0, 1, 2, 3, 4]
```
Args:
start: A 0-D (scalar) of type `int32`. Acts as the first entry in the range if
`limit` is not None; otherwise, acts as the range limit and the first entry
defaults to 0.
limit: A 0-D (scalar) of type `int32`. Upper limit of sequence,
exclusive. If None, defaults to the value of `start` while the first
entry of the range defaults to 0.
delta: A 0-D `Tensor` (scalar) of type `int32`. Number that increments
`start`. Defaults to 1.
name: A name for the operation. Defaults to "range".
Returns:
A 1-D `int32` `Tensor`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def rank(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rank(*args, **kwargs)
It accepts the same arguments as `tensorflow.rank`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.rank(x1, *args, **kwargs)
is equivalent to
builder.rank(*args, **kwargs)(x1)
tensorflow.rank
Returns the rank of a tensor.
This operation returns an integer representing the rank of `input`.
For example:
```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3
```
Note: The rank of a tensor is not the same as the rank of a matrix. The rank
of a tensor is the number of indices required to uniquely select each element
of the tensor. Rank is also known as "order", "degree", or "ndims."
Args:
input: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int32`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def raw_rnn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.raw_rnn(*args, **kwargs)
It accepts the same arguments as `tf.nn.raw_rnn`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.raw_rnn(x1, *args, **kwargs)
is equivalent to
builder.raw_rnn(*args, **kwargs)(x1)
tf.nn.raw_rnn
Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.
NOTE: This method is still in testing, and the API may change.
This function is a more primitive version of `dynamic_rnn` that provides
more direct access to the inputs each iteration. It also provides more
control over when to start and finish reading the sequence, and
what to emit for the output.
For example, it can be used to implement the dynamic decoder of a seq2seq model.
Instead of working with `Tensor` objects, most operations work with
`TensorArray` objects directly.
The operation of `raw_rnn`, in pseudo-code, is basically the following:
time = tf.constant(0, dtype=tf.int32)
(finished, next_input, initial_state, _, loop_state) = loop_fn(
    time=time, cell_output=None, cell_state=None, loop_state=None)
emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)
state = initial_state
while not all(finished):
  (output, cell_state) = cell(next_input, state)
  (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
      time=time + 1, cell_output=output, cell_state=cell_state,
      loop_state=loop_state)
  # Emit zeros and copy forward state for minibatch entries that are finished.
  state = tf.select(finished, state, next_state)
  emit = tf.select(finished, tf.zeros_like(emit), emit)
  emit_ta = emit_ta.write(time, emit)
  # If any new minibatch entries are marked as finished, mark these.
  finished = tf.logical_or(finished, next_finished)
  time += 1
return (emit_ta, state, loop_state)
with the additional properties that output and state may be (possibly nested)
tuples, as determined by `cell.output_size` and `cell.state_size`, and
as a result the final `state` and `emit_ta` may themselves be tuples.
A simple implementation of `dynamic_rnn` via `raw_rnn` looks like this:
```python
inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
                        dtype=tf.float32)
sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
inputs_ta = inputs_ta.unpack(inputs)

cell = tf.nn.rnn_cell.LSTMCell(num_units)

def loop_fn(time, cell_output, cell_state, loop_state):
  emit_output = cell_output  # == None for time == 0
  if cell_output is None:  # time == 0
    next_cell_state = cell.zero_state(batch_size, tf.float32)
  else:
    next_cell_state = cell_state
  elements_finished = (time >= sequence_length)
  finished = tf.reduce_all(elements_finished)
  next_input = tf.cond(
      finished,
      lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
      lambda: inputs_ta.read(time))
  next_loop_state = None
  return (elements_finished, next_input, next_cell_state,
          emit_output, next_loop_state)

outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)
outputs = outputs_ta.pack()
```
Args:
cell: An instance of RNNCell.
loop_fn: A callable that takes inputs
`(time, cell_output, cell_state, loop_state)`
and returns the tuple
`(finished, next_input, next_cell_state, emit_output, next_loop_state)`.
Here `time` is an int32 scalar `Tensor`, `cell_output` is a
`Tensor` or (possibly nested) tuple of tensors as determined by
`cell.output_size`, and `cell_state` is a `Tensor`
or (possibly nested) tuple of tensors, as determined by the `loop_fn`
on its first call (and should match `cell.state_size`).
The outputs are: `finished`, a boolean `Tensor` of
shape `[batch_size]`; `next_input`: the next input to feed to `cell`;
`next_cell_state`: the next state to feed to `cell`;
and `emit_output`: the output to store for this iteration.
Note that `emit_output` should be a `Tensor` or (possibly nested) tuple of tensors with shapes and structure matching `cell.output_size` and `cell_output` above. The parameter `cell_state` and output `next_cell_state` may be either a single or (possibly nested) tuple of tensors.
The parameter `loop_state` and output `next_loop_state` may be either a single or (possibly nested) tuple of `Tensor` and `TensorArray` objects. This last parameter may be ignored by `loop_fn` and the return value may be `None`. If it is not `None`, then the `loop_state` will be propagated through the RNN loop, for use purely by `loop_fn` to keep track of its own state. The `next_loop_state` parameter returned may be `None`.
The first call to `loop_fn` will be `time = 0`, `cell_output = None`, `cell_state = None`, and `loop_state = None`. For this call: The `next_cell_state` value should be the value with which to initialize the cell's state. It may be a final state from a previous RNN or it may be the output of `cell.zero_state()`. It should be a (possibly nested) tuple structure of tensors. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of appropriate type and shape `[batch_size] + cell.state_size`. If `cell.state_size` is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes.
The `emit_output` value may be either `None` or a (possibly nested) tuple structure of tensors, e.g., `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`. If this first `emit_output` return value is `None`, then the `emit_ta` result of `raw_rnn` will have the same structure and dtypes as `cell.output_size`. Otherwise `emit_ta` will have the same structure, shapes (prepended with a `batch_size` dimension), and dtypes as `emit_output`. The actual values returned for `emit_output` at this initializing call are ignored. Note, this emit structure must be consistent across all time steps.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency and can
be run in parallel will be. This parameter trades off time for space. Values
>> 1 use more memory but take less time, while smaller values use less memory
but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference but
needed for back prop from GPU to CPU. This allows training RNNs which would
typically not fit on a single GPU, with very minimal (or no) performance
penalty.
scope: VariableScope for the created subgraph; defaults to "RNN".
Returns:
A tuple `(emit_ta, final_state, final_loop_state)` where:
`emit_ta`: The RNN output `TensorArray`.
If `loop_fn` returns a (possibly nested) set of Tensors for
`emit_output` during initialization (inputs `time = 0`,
`cell_output = None`, and `loop_state = None`), then `emit_ta` will
have the same structure, dtypes, and shapes as `emit_output` instead.
If `loop_fn` returns `emit_output = None` during this call,
the structure of `cell.output_size` is used:
If `cell.output_size` is a (possibly nested) tuple of integers
or `TensorShape` objects, then `emit_ta` will be a tuple having the
same structure as `cell.output_size`, containing TensorArrays whose
elements' shapes correspond to the shape data in `cell.output_size`.
`final_state`: The final cell state. If `cell.state_size` is an int, this
will be shaped `[batch_size, cell.state_size]`. If it is a
`TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
If it is a (possibly nested) tuple of ints or `TensorShape`, this will
be a tuple having the corresponding shapes.
`final_loop_state`: The final loop state as returned by `loop_fn`.
Raises:
TypeError: If `cell` is not an instance of RNNCell, or `loop_fn` is not
a `callable`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def raw_rnn_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.raw_rnn_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.raw_rnn_conv2d_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.raw_rnn`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided, then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not
None, it is applied to the activations as well.
Performs a'trous convolution with input stride equal to `rate` if `rate` is
greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, the a'trous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def raw_rnn_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.raw_rnn_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.raw_rnn_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.raw_rnn`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided, then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not
None, it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last
dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default is None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
def read_file(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.read_file(*args, **kwargs)
It accepts the same arguments as `tensorflow.read_file`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.read_file(x1, *args, **kwargs)
is equivalent to
builder.read_file(*args, **kwargs)(x1)
tensorflow.read_file
Reads and outputs the entire contents of the input filename.
Args:
filename: A `Tensor` of type `string`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
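Since every generated method follows this partial-application pattern, one concrete example is worth spelling out. A minimal sketch, assuming `T` is an identity `TensorBuilder` instance (e.g. `T = TensorBuilder(lambda x: x)`, an assumption, not part of the docs above) and the TF 0.x API documented here:
```python
import tensorflow as tf

# Assumes T = TensorBuilder(lambda x: x), an identity builder.
filename = tf.constant("text.txt")

read = T.read_file()       # partial: still waiting for the filename tensor
contents = read(filename)  # equivalent to tf.read_file(filename)
```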
def real(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.real(*args, **kwargs)
It accepts the same arguments as `tensorflow.real`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.real(x1, *args, **kwargs)
is equivalent to
builder.real(*args, **kwargs)(x1)
tensorflow.real
Returns the real part of a complex number.
Given a tensor `input` of complex numbers, this operation returns a tensor of
type `float32` or `float64` that is the real part of each element in `input`.
All elements in `input` must be complex numbers of the form (a + bj), where a
is the real part returned by this operation and b is the imaginary part.
For example:
```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]
```
If `input` is already real, it is returned unchanged.
Args:
input: A `Tensor`. Must have numeric type.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `float32` or `float64`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
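A runnable version of the example above, assuming the TF 0.x `Session` API used throughout this document:
```python
import tensorflow as tf

inp = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])
with tf.Session() as sess:
    print(sess.run(tf.real(inp)))  # [-2.25  3.25]
```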
def reduce_all(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_all(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_all`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_all(x1, *args, **kwargs)
is equivalent to
builder.reduce_all(*args, **kwargs)(x1)
tensorflow.reduce_all
Computes the "logical and" of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
For example:
```python
# 'x' is [[True,  True]
#         [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
```
Args:
input_tensor: The boolean tensor to reduce.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
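To ground the example above, a runnable version under the TF 0.x `Session` API:
```python
import tensorflow as tf

x = tf.constant([[True, True],
                 [False, False]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_all(x)))     # False
    print(sess.run(tf.reduce_all(x, 0)))  # [False False]
    print(sess.run(tf.reduce_all(x, 1)))  # [ True False]
```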
def reduce_any(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_any(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_any`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_any(x1, *args, **kwargs)
is equivalent to
builder.reduce_any(*args, **kwargs)(x1)
tensorflow.reduce_any
Computes the "logical or" of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
For example:
```python
# 'x' is [[True,  True]
#         [False, False]]
tf.reduce_any(x) ==> True
tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
```
Args:
input_tensor: The boolean tensor to reduce.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def reduce_join(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_join(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_join`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_join(x1, *args, **kwargs)
is equivalent to
builder.reduce_join(*args, **kwargs)(x1)
tensorflow.reduce_join
Joins a string Tensor across the given dimensions.
Computes the string join across dimensions in the given string Tensor of shape
`[d_0, d_1, ..., d_n-1]`. Returns a new Tensor created by joining the input
strings with the given separator (default: empty string). Negative indices are
counted backwards from the end, with `-1` being equivalent to `n - 1`. Passing
an empty `reduction_indices` joins all strings in linear index order and
outputs a scalar string.
For example:
```
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> ["acbd"]
tf.reduce_join(a, [1, 0]) ==> ["abcd"]
tf.reduce_join(a, []) ==> ["abcd"]
```
Args:
inputs: A `Tensor` of type `string`. The input to be joined. All reduced
indices must have non-zero size.
reduction_indices: A `Tensor` of type `int32`. The dimensions to reduce
over. Dimensions are reduced in the order specified. Omitting
`reduction_indices` is equivalent to passing `[n-1, n-2, ..., 0]`. Negative
indices from `-n` to `-1` are supported.
keep_dims: An optional `bool`. Defaults to `False`. If `True`, retain
reduced dimensions with length `1`.
separator: An optional `string`. Defaults to `""`. The separator to use
when joining.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`. Has shape equal to that of the input with
reduced dimensions removed or set to `1` depending on `keep_dims`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
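The separator and index-order semantics are easiest to see executed; a short sketch, again assuming the TF 0.x string ops documented here:
```python
import tensorflow as tf

a = tf.constant([["a", "b"], ["c", "d"]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_join(a, 0)))                 # ["ac", "bd"]
    print(sess.run(tf.reduce_join(a, 0, separator=".")))  # ["a.c", "b.d"]
    print(sess.run(tf.reduce_join(a, [1, 0])))            # "abcd"
```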
def reduce_logsumexp(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_logsumexp(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_logsumexp`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_logsumexp(x1, *args, **kwargs)
is equivalent to
builder.reduce_logsumexp(*args, **kwargs)(x1)
tensorflow.reduce_logsumexp
Computes log(sum(exp(elements across dimensions of a tensor))).
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
This function is more numerically stable than log(sum(exp(input))). It avoids
overflows caused by taking the exp of large inputs and underflows caused by
taking the log of small inputs.
For example:
```python
# 'x' is [[0, 0, 0],
#         [0, 0, 0]]
tf.reduce_logsumexp(x) ==> log(6)
tf.reduce_logsumexp(x, 0) ==> [log(2), log(2), log(2)]
tf.reduce_logsumexp(x, 1) ==> [log(3), log(3)]
tf.reduce_logsumexp(x, 1, keep_dims=True) ==> [[log(3)], [log(3)]]
tf.reduce_logsumexp(x, [0, 1]) ==> log(6)
```
Args:
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
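The stability claim is the whole reason this op exists; a minimal demonstration, assuming the TF 0.x math ops (`tf.log`, `tf.exp`):
```python
import tensorflow as tf

x = tf.constant([[1000.0, 1000.0, 1000.0]])
naive = tf.log(tf.reduce_sum(tf.exp(x)))  # exp(1000.) overflows to inf
stable = tf.reduce_logsumexp(x)           # ~= 1000 + log(3)
with tf.Session() as sess:
    print(sess.run([naive, stable]))      # [inf, ~1001.0986]
```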
def reduce_max(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_max(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_max`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_max(x1, *args, **kwargs)
is equivalent to
builder.reduce_max(*args, **kwargs)(x1)
tensorflow.reduce_max
Computes the maximum of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
Args:
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def reduce_mean(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_mean(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_mean`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_mean(x1, *args, **kwargs)
is equivalent to
builder.reduce_mean(*args, **kwargs)(x1)
tensorflow.reduce_mean
Computes the mean of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
For example:
```python
# 'x' is [[1., 1.]
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
```
Args:
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def reduce_min(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_min(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_min`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_min(x1, *args, **kwargs)
is equivalent to
builder.reduce_min(*args, **kwargs)(x1)
tensorflow.reduce_min
Computes the minimum of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
Args:
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def reduce_prod(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_prod(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_prod`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_prod(x1, *args, **kwargs)
is equivalent to
builder.reduce_prod(*args, **kwargs)(x1)
tensorflow.reduce_prod
Computes the product of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
Args:
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def reduce_sum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reduce_sum(*args, **kwargs)
It accepts the same arguments as `tensorflow.reduce_sum`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reduce_sum(x1, *args, **kwargs)
is equivalent to
builder.reduce_sum(*args, **kwargs)(x1)
tensorflow.reduce_sum
Computes the sum of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
For example:
```python
# 'x' is [[1, 1, 1]
#         [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
```
Args:
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
keep_dims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns: The reduced tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
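All of the `reduce_*` methods above share the builder's partial-application behavior, so they can sit directly in a pipeline. A sketch, assuming `T` is an identity `TensorBuilder` instance (`T = TensorBuilder(lambda x: x)`, an assumption):
```python
import tensorflow as tf

x = tf.constant([[1.0, 1.0, 1.0],
                 [1.0, 1.0, 1.0]])

row_sums = T.reduce_sum(1)(x)  # same as tf.reduce_sum(x, 1) ==> [3., 3.]
total = T.reduce_sum()(x)      # same as tf.reduce_sum(x)    ==> 6.
```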
def register_tensor_conversion_function(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.register_tensor_conversion_function(*args, **kwargs)
It accepts the same arguments as `tensorflow.register_tensor_conversion_function`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.register_tensor_conversion_function(x1, *args, **kwargs)
is equivalent to
builder.register_tensor_conversion_function(*args, **kwargs)(x1)
tensorflow.register_tensor_conversion_function
Registers a function for converting objects of `base_type` to `Tensor`.
The conversion function must have the following signature:
def conversion_func(value, dtype=None, name=None, as_ref=False): # ...
It must return a `Tensor` with the given `dtype` if specified. If the
conversion function creates a new `Tensor`, it should use the given `name` if
specified. All exceptions will be propagated to the caller.
The conversion function may return `NotImplemented` for some inputs. In this
case, the conversion process will continue to try subsequent conversion
functions.
If `as_ref` is true, the function must return a `Tensor` reference, such as a
`Variable`.
NOTE: The conversion functions will execute in order of priority, followed by
order of registration. To ensure that a conversion function `F` runs before
another conversion function `G`, ensure that `F` is registered with a smaller
priority than `G`.
Args:
base_type: The base type or tuple of base types for all objects that
`conversion_func` accepts.
conversion_func: A function that converts instances of `base_type` to
`Tensor`.
priority: Optional integer that indicates the priority for applying this
conversion function. Conversion functions with smaller priority values run
earlier than conversion functions with larger priority values. Defaults to
100.
Raises: TypeError: If the arguments do not have the appropriate type.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
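A sketch of a conversion function with the required signature, registering a hypothetical `Point` class (the class, its fields, and `point_to_tensor` are illustrative, not part of this library):
```python
import tensorflow as tf

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

def point_to_tensor(value, dtype=None, name=None, as_ref=False):
    # Pack the coordinates into a rank-1 Tensor, honoring dtype/name if given.
    return tf.constant([value.x, value.y], dtype=dtype, name=name)

tf.register_tensor_conversion_function(Point, point_to_tensor)
# Point instances are now accepted anywhere a Tensor is expected:
doubled = tf.mul(Point(1.0, 2.0), 2.0)  # tf.mul in the TF 0.x API used here
```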
def relu(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.relu(*args, **kwargs)
It accepts the same arguments as `tf.nn.relu`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.relu(x1, *args, **kwargs)
is equivalent to
builder.relu(*args, **kwargs)(x1)
tf.nn.relu
Computes rectified linear: `max(features, 0)`.
Args:
features: A `Tensor`. Must be one of the following types: `float32`,
`float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `features`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
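As with the other generated methods, `relu` drops its tensor argument; a one-line sketch, assuming an identity `TensorBuilder` instance `T` as before:
```python
import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 3.0])
y = T.relu()(x)  # equivalent to tf.nn.relu(x) ==> [0., 0., 0., 3.]
```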
def relu6(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.relu6(*args, **kwargs)
It accepts the same arguments as `tf.nn.relu6`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.relu6(x1, *args, **kwargs)
is equivalent to
builder.relu6(*args, **kwargs)(x1)
tf.nn.relu6
Computes Rectified Linear 6: `min(max(features, 0), 6)`.
Args:
features: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
`int16`, or `int8`.
name: A name for the operation (optional).
Returns:
A `Tensor` with the same type as `features`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def relu6_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.relu6_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.relu6_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.relu6`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not
None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is
greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if
both strides are the same. Note that presently both strides must have the
same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If
greater than 1, then atrous convolution is applied and `stride` must be set
to 1.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def relu6_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.relu6_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.relu6_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.relu6`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the hidden units. Finally, if `activation_fn` is not
None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last
dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def relu_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.relu_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.relu_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.relu`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not
None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is
greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if
both strides are the same. Note that presently both strides must have the
same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If
greater than 1, then atrous convolution is applied and `stride` must be set
to 1.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def relu_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.relu_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.relu_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.relu`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the hidden units. Finally, if `activation_fn` is not
None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last
dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
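The `*_layer` variants are where the builder pattern pays off: each is `fully_connected` with `activation_fn` pre-bound, so layers chain naturally. A sketch of a tiny network, assuming `T` is an identity `TensorBuilder` instance and `P.Pipe` from phi as shown at the top of this document; `softmax_layer` is an assumed sibling method following the same naming scheme, not confirmed by this page:
```python
import tensorflow as tf
from phi import P

x = tf.placeholder(tf.float32, shape=[None, 4])

probs = P.Pipe(
    x,
    T.relu_layer(16),    # fully_connected(x, 16, activation_fn=tf.nn.relu)
    T.softmax_layer(3),  # assumed sibling method, same generation scheme
)
```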
def report_uninitialized_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.report_uninitialized_variables(*args, **kwargs)
It accepts the same arguments as `tensorflow.report_uninitialized_variables`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.report_uninitialized_variables(x1, *args, **kwargs)
is equivalent to
builder.report_uninitialized_variables(*args, **kwargs)(x1)
tensorflow.report_uninitialized_variables
Adds ops to list the names of uninitialized variables.
When run, it returns a 1-D tensor containing the names of uninitialized
variables if there are any, or an empty array if there are none.
Args:
var_list: List of `Variable` objects to check. Defaults to the value of
`all_variables() + local_variables()`.
name: Optional name of the `Operation`.
Returns: A 1-D tensor containing names of the uninitialized variables, or an
empty 1-D tensor if there are no variables or no uninitialized variables.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def required_space_to_batch_paddings(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.required_space_to_batch_paddings(*args, **kwargs)
It accepts the same arguments as `tensorflow.required_space_to_batch_paddings`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.required_space_to_batch_paddings(x1, *args, **kwargs)
is equivalent to
builder.required_space_to_batch_paddings(*args, **kwargs)(x1)
tensorflow.required_space_to_batch_paddings
Calculates the padding required to make `block_shape` divide `input_shape`.
This function can be used to calculate a suitable `paddings` argument for use
with `space_to_batch_nd` and `batch_to_space_nd`.
Args:
input_shape: int32 Tensor of shape [N].
block_shape: int32 Tensor of shape [N].
base_paddings: Optional int32 Tensor of shape [N, 2]. Specifies the minimum
amount of padding to use. All elements must be >= 0. If not specified,
defaults to 0.
name: string. Optional name prefix.
Returns: (paddings, crops), where `paddings` and `crops` are int32 Tensors of
rank 2 and shape [N, 2] satisfying:
paddings[i, 0] = base_paddings[i, 0]
0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]
(input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0
crops[i, 0] = 0
crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]
Raises: ValueError if called with incompatible shapes.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
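How this pairs with the batch/space ops it is designed for; a sketch, assuming the TF 0.x `space_to_batch_nd`/`batch_to_space_nd` ops:
```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 5, 5, 1])
block_shape = [3, 3]

paddings, crops = tf.required_space_to_batch_paddings(
    tf.shape(x)[1:3], block_shape)
y = tf.space_to_batch_nd(x, block_shape, paddings)
z = tf.batch_to_space_nd(y, block_shape, crops)  # spatial dims match x again
```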
def reset_default_graph(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reset_default_graph(*args, **kwargs)
It accepts the same arguments as `tensorflow.reset_default_graph`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reset_default_graph(x1, *args, **kwargs)
is equivalent to
builder.reset_default_graph(*args, **kwargs)(x1)
tensorflow.reset_default_graph
Clears the default graph stack and resets the global default graph.
NOTE: The default graph is a property of the current thread. This function
applies only to the current thread. Calling this function while a
`tf.Session` or `tf.InteractiveSession` is active will result in undefined
behavior. Using any previously created `tf.Operation` or `tf.Tensor` objects
after calling this function will result in undefined behavior.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def reshape(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reshape(*args, **kwargs)
It accepts the same arguments as `tensorflow.reshape`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reshape(x1, *args, **kwargs)
is equivalent to
builder.reshape(*args, **kwargs)(x1)
tensorflow.reshape
Reshapes a tensor.
Given `tensor`, this operation returns a tensor that has the same values as
`tensor` with shape `shape`.
If one component of `shape` is the special value -1, the size of that
dimension is computed so that the total size remains constant. In particular,
a `shape` of `[-1]` flattens into 1-D. At most one component of `shape` can
be -1.
If `shape` is 1-D or higher, then the operation returns a tensor with shape
`shape` filled with the values of `tensor`. In this case, the number of
elements implied by `shape` must be the same as the number of elements in
`tensor`.
For example:
```prettyprint
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# tensor 't' is [[[1, 1], [2, 2]],
#                [[3, 3], [4, 4]]]
# tensor 't' has shape [2, 2, 2]
reshape(t, [2, 4]) ==> [[1, 1, 2, 2], [3, 3, 4, 4]]

# tensor 't' is [[[1, 1, 1],
#                 [2, 2, 2]],
#                [[3, 3, 3],
#                 [4, 4, 4]],
#                [[5, 5, 5],
#                 [6, 6, 6]]]
# tensor 't' has shape [3, 2, 3]
# pass '[-1]' to flatten 't'
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]

# -1 can also be used to infer the shape
# -1 is inferred to be 9:
reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 2:
reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 3:
reshape(t, [2, -1, 3]) ==> [[[1, 1, 1], [2, 2, 2], [3, 3, 3]],
                            [[4, 4, 4], [5, 5, 5], [6, 6, 6]]]

# tensor 't' is [7]
# shape `[]` reshapes to a scalar
reshape(t, []) ==> 7
```
Args:
tensor: A `Tensor`.
shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.
Defines the shape of the output tensor.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
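A runnable subset of the examples above, assuming the TF 0.x `Session` API:
```python
import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9])
with tf.Session() as sess:
    print(sess.run(tf.reshape(t, [3, 3])))   # [[1 2 3] [4 5 6] [7 8 9]]
    print(sess.run(tf.reshape(t, [3, -1])))  # -1 is inferred to be 3
    print(sess.run(tf.reshape(t, [-1])))     # flatten (already 1-D here)
```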
def reverse(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reverse(*args, **kwargs)
It accepts the same arguments as `tensorflow.reverse`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reverse(x1, *args, **kwargs)
is equivalent to
builder.reverse(*args, **kwargs)(x1)
tensorflow.reverse
Reverses specific dimensions of a tensor.
Given a `tensor`, and a `bool` tensor `dims` representing the dimensions of
`tensor`, this operation reverses each dimension i of `tensor` where
`dims[i]` is `True`.
`tensor` can have up to 8 dimensions. The number of dimensions of `tensor`
must equal the number of elements in `dims`. In other words:
rank(tensor) = size(dims)
For example:
```prettyprint
# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [False, False, False, True]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [False, True, False, False]
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [False, False, True, False]
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
```
Args:
tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`,
`int32`, `int64`, `bool`, `half`, `float32`, `float64`, `complex64`,
`complex128`. Up to 8-D.
dims: A `Tensor` of type `bool`. 1-D. The dimensions to reverse.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
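Note that `dims` is a per-dimension boolean mask rather than a list of axis indices; a small sketch under the TF 0.x signature documented here:
```python
import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
out = tf.reverse(t, [False, True])  # reverse along dim 1 only
# ==> [[3, 2, 1],
#      [6, 5, 4]]
```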
def reverse_sequence(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.reverse_sequence(*args, **kwargs)
It accepts the same arguments as `tensorflow.reverse_sequence`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.reverse_sequence(x1, *args, **kwargs)
is equivalent to
builder.reverse_sequence(*args, **kwargs)(x1)
tensorflow.reverse_sequence
Reverses variable length slices.
This op first slices `input` along the dimension `batch_dim`, and for each
slice `i`, reverses the first `seq_lengths[i]` elements along the dimension
`seq_dim`.
The elements of `seq_lengths` must obey
`seq_lengths[i] < input.dims[seq_dim]`, and `seq_lengths` must be a vector of
length `input.dims[batch_dim]`.
The output slice `i` along dimension `batch_dim` is then given by input slice
`i`, with the first `seq_lengths[i]` slices along dimension `seq_dim`
reversed.
For example:
```prettyprint
# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]

# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 5:, :, ...] = input[3, 5:, :, ...]
```
In contrast, if:
```prettyprint
# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]

# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
```
Args:
input: A `Tensor`. The input to reverse.
seq_lengths: A `Tensor`. Must be one of the following types: `int32`,
`int64`. 1-D with length `input.dims(batch_dim)` and
`max(seq_lengths) < input.dims(seq_dim)`.
seq_dim: An `int`. The dimension which is partially reversed.
batch_dim: An optional `int`. Defaults to `0`. The dimension along which
reversal is performed.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. The partially reversed input. It
has the same shape as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
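A concrete low-rank case, assuming the TF 0.x keyword names (`seq_dim`, `batch_dim`): each batch row is reversed only up to its own length, and the tail is copied through.
```python
import tensorflow as tf

inp = tf.constant([[1, 2, 3, 4],
                   [1, 2, 3, 4]])
out = tf.reverse_sequence(inp, seq_lengths=[2, 3], seq_dim=1, batch_dim=0)
# ==> [[2, 1, 3, 4],
#      [3, 2, 1, 4]]
```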
def rnn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rnn(*args, **kwargs)
It accepts the same arguments as `tf.nn.rnn`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.rnn(x1, *args, **kwargs)
is equivalent to
builder.rnn(*args, **kwargs)(x1)
tf.nn.rnn
Creates a recurrent neural network specified by RNNCell `cell`.
The simplest form of RNN network generated is:
```python
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
    output, state = cell(input_, state)
    outputs.append(output)
return (outputs, state)
```
However, a few other options are available:
An initial state can be provided. If the sequence_length vector is provided,
dynamic calculation is performed. This method of calculation does not compute
the RNN steps past the maximum sequence length of the minibatch (thus saving
computational time), and properly propagates the state at an example's
sequence length to the final state output.
The dynamic calculation performed is, at time `t` for batch row `b`,
```python
(output, state)(b, t) =
    (t >= sequence_length(b))
        ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
        : cell(input(b, t), state(b, t - 1))
```
Args:
cell: An instance of RNNCell.
inputs: A length T list of inputs, each a `Tensor` of shape
`[batch_size, input_size]`, or a nested tuple of such elements.
initial_state: (optional) An initial state for the RNN. If
`cell.state_size` is an integer, this must be a `Tensor` of appropriate type
and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple,
this should be a tuple of tensors having shapes
`[batch_size, s] for s in cell.state_size`.
dtype: (optional) The data type for the initial state and expected output.
Required if initial_state is not provided or RNN state has a heterogeneous
dtype.
sequence_length: Specifies the length of each sequence in inputs. An int32
or int64 vector (tensor) of size `[batch_size]`, with values in `[0, T)`.
scope: VariableScope for the created subgraph; defaults to "RNN".
Returns: A pair (outputs, state) where:
- outputs is a length T list of outputs (one for each input), or a nested
tuple of such elements.
- state is the final state.
Raises:
TypeError: If `cell` is not an instance of RNNCell.
ValueError: If `inputs` is `None` or an empty list, or if the input depth
(column size) cannot be inferred from inputs via shape inference.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
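A minimal static unroll under the TF 0.x recurrent API (`tf.nn.rnn_cell`, list-of-tensors inputs); the sizes are arbitrary:
```python
import tensorflow as tf

batch_size, input_size, steps = 32, 8, 5
cell = tf.nn.rnn_cell.BasicLSTMCell(16)
inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(steps)]

outputs, state = tf.nn.rnn(cell, inputs, dtype=tf.float32)
# outputs: length-5 list of [32, 16] tensors; state: the final RNN state.
```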
def rnn_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rnn_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.rnn_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.rnn`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not
None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is
greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if
both strides are the same. Note that presently both strides must have the
same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If
greater than 1, then atrous convolution is applied and `stride` must be set
to 1.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def rnn_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rnn_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.rnn_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.rnn`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the hidden units. Finally, if `activation_fn` is not
None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last
dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: Optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def rnn_placeholders_from_state(
self, zero_state, name='rnn_state')
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rnn_placeholders_from_state(*args, **kwargs)
It accepts the same arguments as `tb.rnn_placeholders_from_state`.
tb.rnn_placeholders_from_state
None
@TensorBuilder.RegisterMethod("tb")
def rnn_placeholders_from_state(self, zero_state, name="rnn_state"):
    if isinstance(zero_state, tuple):
        return tuple([self.rnn_placeholders_from_state(substate, name=name)
                      for substate in zero_state])
    else:
        return tf.placeholder(zero_state.dtype, shape=zero_state.get_shape(),
                              name=name)
def rnn_state_feed_dict(
self, placeholders, values)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rnn_state_feed_dict(*args, **kwargs)
It accepts the same arguments as `tb.rnn_state_feed_dict`.
tb.rnn_state_feed_dict
None
@TensorBuilder.RegisterMethod("tb")
def rnn_state_feed_dict(self, placeholders, values):
    return dict(zip(utils.flatten(placeholders), utils.flatten_list(values)))
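How the two `tb` helpers above are meant to work together; a sketch, assuming an identity `TensorBuilder` instance `T` and the TF 0.x recurrent API. The idea: mirror a cell's (possibly nested) zero state with placeholders, then zip concrete state values back into a feed dict between `session.run` calls.
```python
import tensorflow as tf

cell = tf.nn.rnn_cell.BasicLSTMCell(16)
zero_state = cell.zero_state(batch_size=32, dtype=tf.float32)

# Placeholders with the same nesting/shape/dtype as the zero state:
state_ph = T.rnn_placeholders_from_state(zero_state)

# Later, with `state_values` obtained from a previous session.run(...):
# feed = T.rnn_state_feed_dict(state_ph, state_values)
# sess.run(next_step, feed_dict=feed)
```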
def round(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.round(*args, **kwargs)
It accepts the same arguments as `tensorflow.round`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.round(x1, *args, **kwargs)
is equivalent to
builder.round(*args, **kwargs)(x1)
tensorflow.round
Rounds the values of a tensor to the nearest integer, element-wise.
For example:
```python
# 'a' is [0.9, 2.5, 2.3, -4.4]
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
```
Args:
x: A `Tensor` of type `float32` or `float64`.
name: A name for the operation (optional).
Returns:
A `Tensor` of same shape and type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def rsqrt(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.rsqrt(*args, **kwargs)
It accepts the same arguments as `tensorflow.rsqrt`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.rsqrt(x1, *args, **kwargs)
is equivalent to
builder.rsqrt(*args, **kwargs)(x1)
tensorflow.rsqrt
Computes the reciprocal of the square root of x element-wise.
I.e., \(y = 1 / \sqrt{x}\).
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`,
`float64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sampled_softmax_loss(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sampled_softmax_loss(*args, **kwargs)
It accepts the same arguments as `tf.nn.sampled_softmax_loss`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.sampled_softmax_loss(x1, *args, **kwargs)
is equivalent to
builder.sampled_softmax_loss(*args, **kwargs)(x1)
tf.nn.sampled_softmax_loss
Computes and returns the sampled softmax training loss.
This is a faster way to train a softmax classifier over a huge number of
classes.
This operation is for training only. It is generally an underestimate of the
full softmax loss.
At inference time, you can compute full softmax probabilities with the
expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`.
See our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf).
Also see Section 3 of Jean et al., 2014 (pdf) for the math.
Args:
weights: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`
objects whose concatenation along dimension 0 has shape
[num_classes, dim]. The (possibly-sharded) class embeddings.
biases: A `Tensor` of shape `[num_classes]`. The class biases.
inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of
the input network.
labels: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The
target classes. Note that this format differs from the `labels` argument of
`nn.softmax_cross_entropy_with_logits`.
num_sampled: An `int`. The number of classes to randomly sample per batch.
num_classes: An `int`. The number of possible classes.
num_true: An `int`. The number of target classes per training example.
sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`,
`sampled_expected_count`) returned by a `*_candidate_sampler` function. (if
None, we default to `log_uniform_candidate_sampler`)
remove_accidental_hits: A `bool`. Whether to remove "accidental hits" where
a sampled class equals one of the target classes. Default is True.
partition_strategy: A string specifying the partitioning strategy, relevant
if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. Default
is `"mod"`. See `tf.nn.embedding_lookup` for more details.
name: A name for the operation (optional).
Returns:
A `batch_size` 1-D tensor of per-example sampled softmax losses.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
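A sketch of the train/inference split the docstring describes, assuming the TF 0.x argument order shown above (`weights, biases, inputs, labels, num_sampled, num_classes`):
```python
import tensorflow as tf

num_classes, dim, batch_size = 10000, 128, 32
weights = tf.Variable(tf.zeros([num_classes, dim]))
biases = tf.Variable(tf.zeros([num_classes]))
inputs = tf.placeholder(tf.float32, [batch_size, dim])
labels = tf.placeholder(tf.int64, [batch_size, 1])

# Training: cheap sampled approximation of the softmax loss.
train_loss = tf.nn.sampled_softmax_loss(
    weights, biases, inputs, labels,
    num_sampled=64, num_classes=num_classes)

# Inference: the full softmax, as suggested in the docstring.
probs = tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)
```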
def sampled_softmax_loss_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sampled_softmax_loss_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.sampled_softmax_loss_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sampled_softmax_loss`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided then a `biases` variable would be
created and added to the activations. Finally, if `activation_fn` is not
None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is
greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the
filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if
both strides are the same. Note that presently both strides must have the
same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If
greater than 1, then atrous convolution is applied and `stride` must be set
to 1.
activation_fn: activation function; set to None to skip it and maintain a
linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are neither created nor added.
Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, biases are
skipped.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, a scope must be given.
variables_collections: optional list of collections for all the variables, or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sampled_softmax_loss_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sampled_softmax_loss_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.sampled_softmax_loss_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sampled_softmax_loss`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
  inputs: A tensor of at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
  num_outputs: Integer or long, the number of output units in the layer.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def saturate_cast(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.saturate_cast(*args, **kwargs)
It accepts the same arguments as tensorflow.saturate_cast
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.saturate_cast(x1, *args, **kwargs)
is equivalent to
builder.saturate_cast(*args, **kwargs)(x1)
tensorflow.saturate_cast
Performs a safe saturating cast of `value` to `dtype`.
This function casts the input to dtype
without applying any scaling. If
there is a danger that values would over or underflow in the cast, this op
applies the appropriate clamping before the cast.
Args:
  value: A `Tensor`.
  dtype: The desired output `DType`.
  name: A name for the operation (optional).
Returns:
  `value` safely cast to `dtype`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
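For example, a small sketch of the clamping behavior (plain TensorFlow; values chosen to fall outside the target range):
```python
import tensorflow as tf

x = tf.constant([-1.0, 0.5, 300.0], dtype=tf.float32)
# Out-of-range values are clamped to uint8's [0, 255] before the cast,
# so evaluating y yields [0, 0, 255] instead of wrapped values.
y = tf.saturate_cast(x, tf.uint8)
```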
def scalar_mul(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scalar_mul(*args, **kwargs)
It accepts the same arguments as tensorflow.scalar_mul
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scalar_mul(x1, *args, **kwargs)
is equivalent to
builder.scalar_mul(*args, **kwargs)(x1)
tensorflow.scalar_mul
Multiplies a scalar times a `Tensor` or `IndexedSlices` object.
Intended for use in gradient code which might deal with IndexedSlices
objects, which are easy to multiply by a scalar but more expensive to
multiply with arbitrary tensors.
Args:
  scalar: A 0-D scalar `Tensor`. Must have known shape.
  x: A `Tensor` or `IndexedSlices` to be scaled.
Returns:
  `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.
Raises:
  ValueError: if scalar is not a 0-D `scalar`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
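A short sketch of the intended use (plain TensorFlow; `g` stands in for a gradient, which could equally be an `IndexedSlices`):
```python
import tensorflow as tf

g = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # e.g. a dense gradient
scaled = tf.scalar_mul(0.5, g)  # same type (Tensor or IndexedSlices) as g
```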
def scalar_summary(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scalar_summary(*args, **kwargs)
It accepts the same arguments as tensorflow.scalar_summary
.
However, the 2nd argument is omitted, a partial with the rest of the arguments is returned which expects the 2nd argument such that
tensorflow.scalar_summary(x1, x2, *args, **kwargs)
is equivalent to
builder.scalar_summary(x1, *args, **kwargs)(x2)
tensorflow.scalar_summary
Outputs a `Summary` protocol buffer with scalar values.
The input `tags` and `values` must have the same shape. The generated summary has a summary value for each tag-value pair in `tags` and `values`.
Args:
  tags: A `string` `Tensor`. Tags for the summaries.
  values: A real numeric Tensor. Values for the summaries.
  collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
  name: A name for the operation (optional).
Returns:
  A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then2(fn, *args, **kwargs)
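Note the `Then2` in the generated source: unlike most methods here, the 2nd argument is the deferred one, so the tag string is supplied up front and the value tensor is piped in. A minimal sketch, assuming the package exposes a builder instance importable as `T` (an assumption; adjust to however you obtain a `TensorBuilder`):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point

loss = tf.constant(0.5)
summary_op = T.scalar_summary("loss")(loss)
# ... equivalent to tf.scalar_summary("loss", loss)
```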
def scan(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scan(*args, **kwargs)
It accepts the same arguments as tensorflow.scan
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scan(x1, *args, **kwargs)
is equivalent to
builder.scan(*args, **kwargs)(x1)
tensorflow.scan
scan on the list of tensors unpacked from `elems` on dimension 0.
The simplest version of `scan` repeatedly applies the callable `fn` to a sequence of elements from first to last. The elements are made of the tensors unpacked from `elems` on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn. If `initializer` is None, `elems` must contain at least one element, and its first element is used as the initializer.
Suppose that `elems` is unpacked into `values`, a list of tensors. The shape of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.
This method also allows multi-arity `elems` and accumulator. If `elems` is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The second argument of `fn` must match the structure of `elems`.
If no `initializer` is provided, the output structure and dtypes of `fn` are assumed to be the same as its input; and in this case, the first argument of `fn` must match the structure of `elems`.
If an `initializer` is provided, then the output of `fn` must have the same structure as `initializer`; and the first argument of `fn` must match this structure.
For example, if `elems` is `(t1, [t2, t3])` and `initializer` is `[i1, i2]` then an appropriate signature for `fn` in `python2` is: `fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list, `[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the one that works in `python3`, is: `fn = lambda a, t:`, where `a` and `t` correspond to the input tuples.
Args:
  fn: The callable to be performed. It accepts two arguments. The first will have the same (possibly nested) structure as `elems`. The second will have the same structure as `initializer` if one is provided, otherwise it will have the same structure as `elems`. Its output must have the same structure as `initializer` if one is provided, otherwise it must have the same structure as `elems`.
  elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to `fn`.
  initializer: (optional) A tensor or (possibly nested) sequence of tensors, initial value for the accumulator, and the expected output type of `fn`.
  parallel_iterations: (optional) The number of iterations allowed to run in parallel.
  back_prop: (optional) True enables support for back propagation.
  swap_memory: (optional) True enables GPU-CPU memory swapping.
  infer_shape: (optional) False disables tests for consistent output shapes.
  name: (optional) Name prefix for the returned tensors.
Returns:
  A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying `fn` to tensors unpacked from `elems` along the first dimension, and the previous accumulator value(s), from first to last.
Raises:
  TypeError: if `fn` is not callable or the structure of the output of `fn` and `initializer` do not match.
  ValueError: if the lengths of the output of `fn` and `initializer` do not match.
Examples:
```python
elems = np.array([1, 2, 3, 4, 5, 6])
sum = scan(lambda a, x: a + x, elems)
# sum == [1, 3, 6, 10, 15, 21]
```
```python
elems = np.array([1, 2, 3, 4, 5, 6])
initializer = np.array(0)
sum_one = scan(
    lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
# sum_one == [1, 2, 3, 4, 5, 6]
```
```python
elems = np.array([1, 0, 0, 0, 0, 0])
initializer = (np.array(0), np.array(1))
fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
# fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])
```
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def scatter_add(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scatter_add(*args, **kwargs)
It accepts the same arguments as tensorflow.scatter_add
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scatter_add(x1, *args, **kwargs)
is equivalent to
builder.scatter_add(*args, **kwargs)(x1)
tensorflow.scatter_add
Adds sparse updates to a variable reference.
This operation computes
```python
# Scalar indices
ref[indices, ...] += updates[...]

# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
```
This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple `indices` reference the same location, their contributions add.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
Args:
  ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into the first dimension of `ref`.
  updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated values to add to `ref`.
  use_locking: An optional `bool`. Defaults to `False`. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).
Returns:
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
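For example, a runnable sketch of the duplicate-index behavior (TensorFlow 0.x-era session API):
```python
import tensorflow as tf

ref = tf.Variable([0.0, 0.0, 0.0])
# Index 1 appears twice, so its contributions add: [0, 1+2, 2].
update = tf.scatter_add(ref, [1, 2, 1], [1.0, 2.0, 2.0])

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(update))  # [0. 3. 2.]
```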
def scatter_div(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scatter_div(*args, **kwargs)
It accepts the same arguments as tensorflow.scatter_div
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scatter_div(x1, *args, **kwargs)
is equivalent to
builder.scatter_div(*args, **kwargs)(x1)
tensorflow.scatter_div
Divides a variable reference by sparse updates.
This operation computes
```python
# Scalar indices
ref[indices, ...] /= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
```
This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple `indices` reference the same location, their contributions divide.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
Args:
  ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into the first dimension of `ref`.
  updates: A `Tensor`. Must have the same type as `ref`. A tensor of values that `ref` is divided by.
  use_locking: An optional `bool`. Defaults to `False`. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).
Returns:
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def scatter_mul(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scatter_mul(*args, **kwargs)
It accepts the same arguments as tensorflow.scatter_mul
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scatter_mul(x1, *args, **kwargs)
is equivalent to
builder.scatter_mul(*args, **kwargs)(x1)
tensorflow.scatter_mul
Multiplies sparse updates into a variable reference.
This operation computes
```python
# Scalar indices
ref[indices, ...] *= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
```
This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple `indices` reference the same location, their contributions multiply.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
Args:
  ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into the first dimension of `ref`.
  updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated values to multiply to `ref`.
  use_locking: An optional `bool`. Defaults to `False`. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).
Returns:
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def scatter_sub(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scatter_sub(*args, **kwargs)
It accepts the same arguments as tensorflow.scatter_sub
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scatter_sub(x1, *args, **kwargs)
is equivalent to
builder.scatter_sub(*args, **kwargs)(x1)
tensorflow.scatter_sub
Subtracts sparse updates to a variable reference.
This operation computes
```python
# Scalar indices
ref[indices, ...] -= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
```
This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple `indices` reference the same location, their (negated) contributions add.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
Args:
  ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be from a `Variable` node.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into the first dimension of `ref`.
  updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated values to subtract from `ref`.
  use_locking: An optional `bool`. Defaults to `False`. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).
Returns:
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def scatter_update(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.scatter_update(*args, **kwargs)
It accepts the same arguments as tensorflow.scatter_update
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.scatter_update(x1, *args, **kwargs)
is equivalent to
builder.scatter_update(*args, **kwargs)(x1)
tensorflow.scatter_update
Applies sparse updates to a variable reference.
This operation computes
```python
# Scalar indices
ref[indices, ...] = updates[...]

# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
```
This operation outputs `ref` after the update is done. This makes it easier to chain operations that need to use the reset value.
If values in `ref` are to be updated more than once, because there are duplicate entries in `indices`, the order at which the updates happen for each value is undefined.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
Args:
  ref: A mutable `Tensor`. Should be from a `Variable` node.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor of indices into the first dimension of `ref`.
  updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated values to store in `ref`.
  use_locking: An optional `bool`. Defaults to `True`. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  name: A name for the operation (optional).
Returns:
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
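A short sketch of whole-row replacement (plain TensorFlow; with duplicate indices the resulting order would be undefined, per the docstring above):
```python
import tensorflow as tf

ref = tf.Variable([[1, 2], [3, 4], [5, 6]])
# Replaces rows 0 and 2; after running, ref is [[9, 9], [3, 4], [7, 7]].
op = tf.scatter_update(ref, [0, 2], [[9, 9], [7, 7]])
```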
def segment_max(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.segment_max(*args, **kwargs)
It accepts the same arguments as tensorflow.segment_max
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.segment_max(x1, *args, **kwargs)
is equivalent to
builder.segment_max(*args, **kwargs)(x1)
tensorflow.segment_max
Computes the maximum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \max_j(data_j)\) where `max` is over `j` such that `segment_ids[j] == i`.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
  segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose rank is equal to the rank of `data`'s first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`. Has same shape as `data`, except for dimension 0 which has size `k`, the number of segments.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def segment_mean(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.segment_mean(*args, **kwargs)
It accepts the same arguments as tensorflow.segment_mean
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.segment_mean(x1, *args, **kwargs)
is equivalent to
builder.segment_mean(*args, **kwargs)(x1)
tensorflow.segment_mean
Computes the mean along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \frac{\sum_j data_j}{N}\) where the mean is over `j` such that `segment_ids[j] == i` and `N` is the total number of values summed.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
  segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose rank is equal to the rank of `data`'s first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`. Has same shape as `data`, except for dimension 0 which has size `k`, the number of segments.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def segment_min(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.segment_min(*args, **kwargs)
It accepts the same arguments as tensorflow.segment_min
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.segment_min(x1, *args, **kwargs)
is equivalent to
builder.segment_min(*args, **kwargs)(x1)
tensorflow.segment_min
Computes the minimum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \min_j(data_j)\) where `min` is over `j` such that `segment_ids[j] == i`.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
  segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose rank is equal to the rank of `data`'s first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`. Has same shape as `data`, except for dimension 0 which has size `k`, the number of segments.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def segment_prod(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.segment_prod(*args, **kwargs)
It accepts the same arguments as tensorflow.segment_prod
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.segment_prod(x1, *args, **kwargs)
is equivalent to
builder.segment_prod(*args, **kwargs)(x1)
tensorflow.segment_prod
Computes the product along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over `j` such that `segment_ids[j] == i`.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
  segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose rank is equal to the rank of `data`'s first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`. Has same shape as `data`, except for dimension 0 which has size `k`, the number of segments.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def segment_sum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.segment_sum(*args, **kwargs)
It accepts the same arguments as tensorflow.segment_sum
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.segment_sum(x1, *args, **kwargs)
is equivalent to
builder.segment_sum(*args, **kwargs)(x1)
tensorflow.segment_sum
Computes the sum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \sum_j data_j\) where the sum is over `j` such that `segment_ids[j] == i`.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
  segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`. A 1-D tensor whose rank is equal to the rank of `data`'s first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`. Has same shape as `data`, except for dimension 0 which has size `k`, the number of segments.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
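For example (plain TensorFlow; `segment_ids` sorted as required):
```python
import tensorflow as tf

data = tf.constant([1, 2, 3, 4, 5])
segment_ids = tf.constant([0, 0, 1, 1, 1])
# Output row i sums rows j with segment_ids[j] == i: [1+2, 3+4+5].
s = tf.segment_sum(data, segment_ids)  # ==> [3, 12]
```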
def select(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.select(*args, **kwargs)
It accepts the same arguments as tensorflow.select
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.select(x1, *args, **kwargs)
is equivalent to
builder.select(*args, **kwargs)(x1)
tensorflow.select
Selects elements from `t` or `e`, depending on `condition`.
The `t`, and `e` tensors must all have the same shape, and the output will also have that shape. The `condition` tensor must be a scalar if `t` and `e` are scalars. If `t` and `e` are vectors or higher rank, then `condition` must be either a vector with size matching the first dimension of `t`, or must have the same shape as `t`.
The `condition` tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from `t` (if true) or `e` (if false).
If `condition` is a vector and `t` and `e` are higher rank matrices, then it chooses which row (outer dimension) to copy from `t` and `e`. If `condition` has the same shape as `t` and `e`, then it chooses which element to copy from `t` and `e`.
For example:
```prettyprint
# 'condition' tensor is [[True, False]
#                        [False, True]]
# 't' is [[1, 2],
#         [3, 4]]
# 'e' is [[5, 6],
#         [7, 8]]
select(condition, t, e) ==> [[1, 6],
                             [7, 4]]

# 'condition' tensor is [True, False]
# 't' is [[1, 2],
#         [3, 4]]
# 'e' is [[5, 6],
#         [7, 8]]
select(condition, t, e) ==> [[1, 2],
                             [7, 8]]
```
Args:
  condition: A `Tensor` of type `bool`.
  t: A `Tensor` which may have the same shape as `condition`. If `condition` is rank 1, `t` may have higher rank, but its first dimension must match the size of `condition`.
  e: A `Tensor` with the same type and shape as `t`.
  name: A name for the operation (optional).
Returns:
  A `Tensor` with the same type and shape as `t` and `e`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def self_adjoint_eig(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.self_adjoint_eig(*args, **kwargs)
It accepts the same arguments as tensorflow.self_adjoint_eig
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.self_adjoint_eig(x1, *args, **kwargs)
is equivalent to
builder.self_adjoint_eig(*args, **kwargs)(x1)
tensorflow.self_adjoint_eig
Computes the eigen decomposition of a batch of self-adjoint matrices.
Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in `tensor` such that `tensor[..., :, :] * v[..., :, i] = e[..., i] * v[..., :, i]`, for i=0...N-1.
Args:
  tensor: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of each inner matrix is referenced.
  name: string, optional name of the operation.
Returns:
  e: Eigenvalues. Shape is `[..., N]`.
  v: Eigenvectors. Shape is `[..., N, N]`. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in `tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
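For example, a minimal sketch on a single symmetric matrix (the eigenvalues of [[2, 1], [1, 2]] are 1 and 3; return convention per this version's docstring above):
```python
import tensorflow as tf

m = tf.constant([[2.0, 1.0], [1.0, 2.0]])
# e has shape [2]; the columns of v are the corresponding eigenvectors.
e, v = tf.self_adjoint_eig(m)  # e ==> [1., 3.] (up to ordering)
```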
def self_adjoint_eigvals(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.self_adjoint_eigvals(*args, **kwargs)
It accepts the same arguments as tensorflow.self_adjoint_eigvals
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.self_adjoint_eigvals(x1, *args, **kwargs)
is equivalent to
builder.self_adjoint_eigvals(*args, **kwargs)(x1)
tensorflow.self_adjoint_eigvals
Computes the eigenvalues of one or more self-adjoint matrices.
Args:
  tensor: `Tensor` of shape `[..., N, N]`.
  name: string, optional name of the operation.
Returns:
  e: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N` eigenvalues of `tensor[..., :, :]`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def separable_conv2d(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.separable_conv2d(*args, **kwargs)
It accepts the same arguments as tf.nn.separable_conv2d
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.separable_conv2d(x1, *args, **kwargs)
is equivalent to
builder.separable_conv2d(*args, **kwargs)(x1)
tf.nn.separable_conv2d
2-D convolution with separable filters.
Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions `[1, 2]` and `3`, not spatial separability between dimensions `1` and `2`.
In detail,
```
output[b, i, j, k] = sum_{di, dj, q, r}
    input[b, strides[1] * i + di, strides[2] * j + dj, q] *
    depthwise_filter[di, dj, q, r] *
    pointwise_filter[0, 0, q * channel_multiplier + r, k]
```
`strides` controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Args:
  input: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
  depthwise_filter: 4-D `Tensor` with shape `[filter_height, filter_width, in_channels, channel_multiplier]`. Contains `in_channels` convolutional filters of depth 1.
  pointwise_filter: 4-D `Tensor` with shape `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise filter to mix channels after `depthwise_filter` has convolved spatially.
  strides: 1-D of size 4. The strides for the depthwise convolution for each dimension of `input`.
  padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
  name: A name for this operation (optional).
Returns:
  A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
Raises:
  ValueError: If `channel_multiplier * in_channels > out_channels`, which means that the separable convolution is overparameterized.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
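A shape-only sketch (illustrative filter sizes; `channel_multiplier = 2` expands 3 input channels to 6 intermediate channels, which the pointwise filter mixes into 8 outputs):
```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
depthwise = tf.Variable(tf.truncated_normal([3, 3, 3, 2], stddev=0.1))
pointwise = tf.Variable(tf.truncated_normal([1, 1, 6, 8], stddev=0.1))
y = tf.nn.separable_conv2d(x, depthwise, pointwise,
                           strides=[1, 1, 1, 1], padding='SAME')
# y has shape [None, 32, 32, 8]
```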
def separable_conv2d_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.separable_conv2d_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.separable_conv2d_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.separable_conv2d`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
  inputs: a 4-D tensor `[batch_size, height, width, channels]`.
  num_outputs: integer, the number of output filters.
  kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
  stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
  padding: one of `VALID` or `SAME`.
  rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and `stride` must be set to 1.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
  ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def separable_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.separable_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.separable_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.separable_conv2d`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
  inputs: A tensor of at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
  num_outputs: Integer or long, the number of output units in the layer.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sequence_mask(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sequence_mask(*args, **kwargs)
It accepts the same arguments as tensorflow.sequence_mask
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sequence_mask(x1, *args, **kwargs)
is equivalent to
builder.sequence_mask(*args, **kwargs)(x1)
tensorflow.sequence_mask
Return a mask tensor representing the first N positions of each row.
Example:
```python
tf.sequence_mask([1, 3, 2], 5) =
  [[True, False, False, False, False],
   [True, True, True, False, False],
   [True, True, False, False, False]]
```
Args:
  lengths: 1D integer tensor, all its values < maxlen.
  maxlen: scalar integer tensor, maximum length of each row. Default: use maximum over lengths.
  dtype: output type of the resulting tensor.
  name: name of the op.
Returns:
  A 2D mask tensor, as shown in the example above, cast to specified dtype.
Raises:
  ValueError: if the arguments have invalid rank.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def serialize_many_sparse(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.serialize_many_sparse(*args, **kwargs)
It accepts the same arguments as tensorflow.serialize_many_sparse
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.serialize_many_sparse(x1, *args, **kwargs)
is equivalent to
builder.serialize_many_sparse(*args, **kwargs)(x1)
tensorflow.serialize_many_sparse
Serialize an `N`-minibatch `SparseTensor` into an `[N, 3]` string `Tensor`.
The `SparseTensor` must have rank `R` greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the `SparseTensor` must be sorted in increasing order of this first dimension. The serialized `SparseTensor` objects going into each row of the output `Tensor` will have rank `R-1`.
The minibatch size `N` is extracted from `sparse_shape[0]`.
Args:
  sp_input: The input rank `R` `SparseTensor`.
  name: A name prefix for the returned tensors (optional).
Returns:
  A string matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column represents serialized `SparseTensor`'s indices, values, and shape (respectively).
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def serialize_sparse(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.serialize_sparse(*args, **kwargs)
It accepts the same arguments as tensorflow.serialize_sparse
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.serialize_sparse(x1, *args, **kwargs)
is equivalent to
builder.serialize_sparse(*args, **kwargs)(x1)
tensorflow.serialize_sparse
Serialize a `SparseTensor` into a string 3-vector (1-D `Tensor`) object.
Args:
  sp_input: The input `SparseTensor`.
  name: A name prefix for the returned tensors (optional).
Returns:
  A string 3-vector (1-D `Tensor`), with each column representing the serialized `SparseTensor`'s indices, values, and shape (respectively).
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def set_random_seed(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.set_random_seed(*args, **kwargs)
It accepts the same arguments as tensorflow.set_random_seed
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.set_random_seed(x1, *args, **kwargs)
is equivalent to
builder.set_random_seed(*args, **kwargs)(x1)
tensorflow.set_random_seed
Sets the graph-level random seed.
Operations that rely on a random seed actually derive it from two seeds: the graph-level and operation-level seeds. This sets the graph-level seed.
Its interactions with operation-level seeds are as follows:
- If neither the graph-level nor the operation seed is set: A random seed is used for this op.
- If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
- If the graph-level seed is not set, but the operation seed is set: A default graph-level seed and the specified operation seed are used to determine the random sequence.
- If both the graph-level and the operation seed are set: Both seeds are used in conjunction to determine the random sequence.
To illustrate the user-visible effects, consider these examples:
To generate different sequences across sessions, set neither graph-level nor op-level seeds:
```python
a = tf.random_uniform([1])
b = tf.random_normal([1])

print("Session 1")
with tf.Session() as sess1:
    print(sess1.run(a))  # generates 'A1'
    print(sess1.run(a))  # generates 'A2'
    print(sess1.run(b))  # generates 'B1'
    print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
    print(sess2.run(a))  # generates 'A3'
    print(sess2.run(a))  # generates 'A4'
    print(sess2.run(b))  # generates 'B3'
    print(sess2.run(b))  # generates 'B4'
```
To generate the same repeatable sequence for an op across sessions, set the seed for the op:
```python
a = tf.random_uniform([1], seed=1)
b = tf.random_normal([1])

# Repeatedly running this block with the same graph will generate the same
# sequence of values for 'a', but different sequences of values for 'b'.
print("Session 1")
with tf.Session() as sess1:
    print(sess1.run(a))  # generates 'A1'
    print(sess1.run(a))  # generates 'A2'
    print(sess1.run(b))  # generates 'B1'
    print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
    print(sess2.run(a))  # generates 'A1'
    print(sess2.run(a))  # generates 'A2'
    print(sess2.run(b))  # generates 'B3'
    print(sess2.run(b))  # generates 'B4'
```
To make the random sequences generated by all ops be repeatable across sessions, set a graph-level seed:
```python
tf.set_random_seed(1234)
a = tf.random_uniform([1])
b = tf.random_normal([1])

# Repeatedly running this block with the same graph will generate the same
# sequences of 'a' and 'b'.
print("Session 1")
with tf.Session() as sess1:
    print(sess1.run(a))  # generates 'A1'
    print(sess1.run(a))  # generates 'A2'
    print(sess1.run(b))  # generates 'B1'
    print(sess1.run(b))  # generates 'B2'

print("Session 2")
with tf.Session() as sess2:
    print(sess2.run(a))  # generates 'A1'
    print(sess2.run(a))  # generates 'A2'
    print(sess2.run(b))  # generates 'B1'
    print(sess2.run(b))  # generates 'B2'
```
Args:
  seed: integer.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def shape(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.shape(*args, **kwargs)
It accepts the same arguments as tensorflow.shape
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.shape(x1, *args, **kwargs)
is equivalent to
builder.shape(*args, **kwargs)(x1)
tensorflow.shape
Returns the shape of a tensor.
This operation returns a 1-D integer tensor representing the shape of `input`.
For example:
```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
```
Args:
  input: A `Tensor` or `SparseTensor`.
  name: A name for the operation (optional).
  out_type: (Optional) The specified output type of the operation (`int32` or `int64`). Defaults to tf.int32.
Returns:
  A `Tensor` of type `out_type`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def shape_n(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.shape_n(*args, **kwargs)
It accepts the same arguments as tensorflow.shape_n
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.shape_n(x1, *args, **kwargs)
is equivalent to
builder.shape_n(*args, **kwargs)(x1)
tensorflow.shape_n
Returns shape of tensors.
This operation returns N 1-D integer tensors representing the shape of `input[i]`s.
Args:
  input: A list of at least 1 `Tensor` objects of the same type.
  out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
  name: A name for the operation (optional).
Returns:
  A list with the same number of `Tensor` objects as `input` of `Tensor` objects of type `out_type`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sigmoid(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sigmoid(*args, **kwargs)
It accepts the same arguments as tf.nn.sigmoid
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.sigmoid(x1, *args, **kwargs)
is equivalent to
builder.sigmoid(*args, **kwargs)(x1)
tf.nn.sigmoid
Computes sigmoid of `x` element-wise.
Specifically, `y = 1 / (1 + exp(-x))`.
Args:
  x: A Tensor with type `float32`, `float64`, `int32`, `complex64`, `int64`, or `qint32`.
  name: A name for the operation (optional).
Returns:
  A Tensor with the same type as `x` if `x.dtype != qint32`, otherwise the return type is `quint8`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
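Since this is the simplest of the generated unary wrappers, it also illustrates the general pattern: the piped tensor fills the omitted 1st argument. A minimal sketch, assuming a builder instance importable as `T` (an assumption about the package's entry point):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point

x = tf.placeholder(tf.float32, [None, 5])
y = T.Pipe(x, T.sigmoid())  # equivalent to tf.nn.sigmoid(x)
```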
def sigmoid_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sigmoid_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.sigmoid_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sigmoid`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
  inputs: a 4-D tensor `[batch_size, height, width, channels]`.
  num_outputs: integer, the number of output filters.
  kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
  stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
  padding: one of `VALID` or `SAME`.
  rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and `stride` must be set to 1.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
  ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sigmoid_cross_entropy_with_logits(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sigmoid_cross_entropy_with_logits(*args, **kwargs)
It accepts the same arguments as tf.nn.sigmoid_cross_entropy_with_logits
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.sigmoid_cross_entropy_with_logits(x1, *args, **kwargs)
is equivalent to
builder.sigmoid_cross_entropy_with_logits(*args, **kwargs)(x1)
tf.nn.sigmoid_cross_entropy_with_logits
Computes sigmoid cross entropy given `logits`.
Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.
For brevity, let `x = logits`, `z = targets`. The logistic loss is
```
  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))
```
For x < 0, to avoid overflow in exp(-x), we reformulate the above
```
  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))
```
Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation
```
max(x, 0) - x * z + log(1 + exp(-abs(x)))
```
`logits` and `targets` must have the same type and shape.
Args:
  logits: A `Tensor` of type `float32` or `float64`.
  targets: A `Tensor` of the same type and shape as `logits`.
  name: A name for the operation (optional).
Returns:
  A `Tensor` of the same shape as `logits` with the componentwise logistic losses.
Raises:
  ValueError: If `logits` and `targets` do not have the same shape.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
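The stable formulation above is easy to check numerically; a small NumPy sketch (an illustration of the algebra, not the library's implementation):
```python
import numpy as np

def stable_sigmoid_xent(x, z):
    # max(x, 0) - x * z + log(1 + exp(-abs(x)))
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

x = np.array([-50.0, 0.0, 50.0])
z = np.array([1.0, 0.0, 1.0])
print(stable_sigmoid_xent(x, z))  # finite even where exp(-x) would overflow
```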
def sigmoid_cross_entropy_with_logits_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sigmoid_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.sigmoid_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sigmoid_cross_entropy_with_logits`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
convolution2d creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
  inputs: a 4-D tensor `[batch_size, height, width, channels]`.
  num_outputs: integer, the number of output filters.
  kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
  stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
  padding: one of `VALID` or `SAME`.
  rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, then atrous convolution is applied and `stride` must be set to 1.
  activation_fn: activation function, set to None to skip it and maintain a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
  ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sigmoid_cross_entropy_with_logits_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sigmoid_cross_entropy_with_logits_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.sigmoid_cross_entropy_with_logits_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sigmoid_cross_entropy_with_logits`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sigmoid_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sigmoid_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.sigmoid_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sigmoid`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
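A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

x = tf.placeholder(tf.float32, [None, 784])

# A fully connected layer with 64 units and a preset sigmoid activation;
# matches tf.contrib.layers.fully_connected(x, 64, activation_fn=tf.nn.sigmoid)
h = T.sigmoid_layer(64)(x)
```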
def sign(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sign(*args, **kwargs)
It accepts the same arguments as `tensorflow.sign`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sign(x1, *args, **kwargs)
is equivalent to
builder.sign(*args, **kwargs)(x1)
tensorflow.sign
Returns an element-wise indication of the sign of a number.
`y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`.
For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`.
Args:
x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
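A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

x = tf.constant([-3.0, 0.0, 2.5])

# Calling with no extra arguments yields a partial expecting the tensor;
# same as tensorflow.sign(x), which evaluates to [-1., 0., 1.]
y = T.sign()(x)
```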
def sin(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sin(*args, **kwargs)
It accepts the same arguments as `tensorflow.sin`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sin(x1, *args, **kwargs)
is equivalent to
builder.sin(*args, **kwargs)(x1)
tensorflow.sin
Computes sin of x element-wise.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def size(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.size(*args, **kwargs)
It accepts the same arguments as `tensorflow.size`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.size(x1, *args, **kwargs)
is equivalent to
builder.size(*args, **kwargs)(x1)
tensorflow.size
Returns the size of a tensor.
This operation returns an integer representing the number of elements in `input`.
For example:

```python
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
```

Args:
input: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
out_type: (Optional) The specified output type of the operation (`int32` or `int64`). Defaults to tf.int32.
Returns:
A `Tensor` of type `out_type`. Defaults to tf.int32.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
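A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])

# Same as tensorflow.size(t); evaluates to 12
n = T.size()(t)
```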
def slice(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.slice(*args, **kwargs)
It accepts the same arguments as `tensorflow.slice`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.slice(x1, *args, **kwargs)
is equivalent to
builder.slice(*args, **kwargs)(x1)
tensorflow.slice
Extracts a slice from a tensor.
This operation extracts a slice of size `size` from a tensor `input` starting at the location specified by `begin`. The slice `size` is represented as a tensor shape, where `size[i]` is the number of elements of the 'i'th dimension of `input` that you want to slice. The starting location (`begin`) for the slice is represented as an offset in each dimension of `input`. In other words, `begin[i]` is the offset into the 'i'th dimension of `input` that you want to slice from.
`begin` is zero-based; `size` is one-based. If `size[i]` is -1, all remaining elements in dimension i are included in the slice. In other words, this is equivalent to setting:
`size[i] = input.dim_size(i) - begin[i]`
This operation requires that:
`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`
For example:

```
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
                                            [4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
                                           [[5, 5, 5]]]
```

Args:
input_: A `Tensor`.
begin: An `int32` or `int64` `Tensor`.
size: An `int32` or `int64` `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
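A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

x = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])

# begin and size are supplied up front; the partial receives the input
# tensor, matching tensorflow.slice(x, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
s = T.slice([1, 0, 0], [1, 1, 3])(x)
```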
def softmax(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softmax(*args, **kwargs)
It accepts the same arguments as `tf.nn.softmax`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.softmax(x1, *args, **kwargs)
is equivalent to
builder.softmax(*args, **kwargs)(x1)
tf.nn.softmax
Computes softmax activations.
For each batch `i` and class `j` we have
`softmax = exp(logits) / reduce_sum(exp(logits), dim)`
Args:
logits: A non-empty `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
dim: The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
Raises:
InvalidArgumentError: if `logits` is empty or `dim` is beyond the last dimension of `logits`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
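A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

logits = tf.constant([[1.0, 2.0, 3.0]])

# Same as tf.nn.softmax(logits)
probs = T.softmax()(logits)
```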
def softmax_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softmax_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.softmax_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softmax`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softmax_cross_entropy_with_logits(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softmax_cross_entropy_with_logits(*args, **kwargs)
It accepts the same arguments as `tf.nn.softmax_cross_entropy_with_logits`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.softmax_cross_entropy_with_logits(x1, *args, **kwargs)
is equivalent to
builder.softmax_cross_entropy_with_logits(*args, **kwargs)(x1)
tf.nn.softmax_cross_entropy_with_logits
Computes softmax cross entropy between `logits` and `labels`.
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
NOTE: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of `labels` is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.
If using exclusive `labels` (wherein one and only one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.
WARNING: This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.
`logits` and `labels` must have the same shape `[batch_size, num_classes]` and the same dtype (either `float16`, `float32`, or `float64`).
Args:
logits: Unscaled log probabilities.
labels: Each row `labels[i]` must be a valid probability distribution.
dim: The class dimension. Defaulted to -1 which is the last dimension.
name: A name for the operation (optional).
Returns:
A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the softmax cross entropy loss.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
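A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

logits = tf.placeholder(tf.float32, [None, 10])
labels = tf.placeholder(tf.float32, [None, 10])  # each row a distribution

# Pass unscaled logits; the op applies softmax internally. Equivalent to
# tf.nn.softmax_cross_entropy_with_logits(logits, labels)
loss = T.softmax_cross_entropy_with_logits(labels)(logits)
```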
def softmax_cross_entropy_with_logits_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softmax_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.softmax_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softmax_cross_entropy_with_logits`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softmax_cross_entropy_with_logits_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softmax_cross_entropy_with_logits_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.softmax_cross_entropy_with_logits_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softmax_cross_entropy_with_logits`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softmax_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softmax_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.softmax_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softmax`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
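A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

x = tf.placeholder(tf.float32, [None, 128])

# 10-way output with activation_fn preset to tf.nn.softmax; matches
# tf.contrib.layers.fully_connected(x, 10, activation_fn=tf.nn.softmax)
probs = T.softmax_layer(10)(x)
```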
def softplus(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softplus(*args, **kwargs)
It accepts the same arguments as `tf.nn.softplus`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.softplus(x1, *args, **kwargs)
is equivalent to
builder.softplus(*args, **kwargs)(x1)
tf.nn.softplus
Computes softplus: `log(exp(features) + 1)`.
Args:
features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `features`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softplus_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softplus_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.softplus_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softplus`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
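A brief usage sketch of a `*_conv2d_layer` method (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

images = tf.placeholder(tf.float32, [None, 28, 28, 1])

# 16 output filters with a 3x3 kernel; activation_fn is preset to
# tf.nn.softplus, so this matches
# tf.contrib.layers.convolution2d(images, 16, [3, 3],
#                                 activation_fn=tf.nn.softplus)
out = T.softplus_conv2d_layer(16, [3, 3])(images)
```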
def softplus_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softplus_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.softplus_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softplus`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softsign(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softsign(*args, **kwargs)
It accepts the same arguments as `tf.nn.softsign`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.softsign(x1, *args, **kwargs)
is equivalent to
builder.softsign(*args, **kwargs)(x1)
tf.nn.softsign
Computes softsign: `features / (abs(features) + 1)`.
Args:
features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `features`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softsign_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softsign_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.softsign_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softsign`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def softsign_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.softsign_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.softsign_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.softsign`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default is set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def space_to_batch(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.space_to_batch(*args, **kwargs)
It accepts the same arguments as `tensorflow.space_to_batch`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.space_to_batch(x1, *args, **kwargs)
is equivalent to
builder.space_to_batch(*args, **kwargs)(x1)
tensorflow.space_to_batch
SpaceToBatch for 4-D tensors of type T.
This is a legacy version of the more general SpaceToBatchND.
Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the `height` and `width` dimensions are moved to the `batch` dimension. After the zero-padding, both `height` and `width` of the input must be divisible by the block size.
Args:
input: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.
paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies the padding of the input with zeros across the spatial dimensions as follows:

```
paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]
```

The effective spatial dimensions of the zero-padded input tensor will be:

```
height_pad = pad_top + height + pad_bottom
width_pad = pad_left + width + pad_right
```

The attr `block_size` must be greater than one. It indicates the block size.
* Non-overlapping blocks of size `block_size x block_size` in the height and width dimensions are rearranged into the batch dimension at each location.
* The batch of the output tensor is `batch * block_size * block_size`.
* Both height_pad and width_pad must be divisible by block_size.
The shape of the output will be:

```
[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]
```

Some examples:
(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

The output tensor has shape `[4, 1, 1, 1]` and value:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

The output tensor has shape `[4, 1, 1, 3]` and value:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:

```prettyprint
x = [[[[1],  [2],  [3],  [4]],
      [[5],  [6],  [7],  [8]],
      [[9],  [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

The output tensor has shape `[4, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]], [[5], [7]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:

```prettyprint
x = [[[[1], [2], [3], [4]],
      [[5], [6], [7], [8]]],
     [[[9], [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

The output tensor has shape `[8, 1, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
     [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
```

Among others, this operation is useful for reducing atrous convolution into regular convolution.
block_size: An `int` that is `>= 2`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
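A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

x = tf.constant([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]

# No padding, block_size 2: output shape is [4, 1, 1, 1], matching
# tensorflow.space_to_batch(x, paddings=[[0, 0], [0, 0]], block_size=2)
y = T.space_to_batch(paddings=[[0, 0], [0, 0]], block_size=2)(x)
```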
def space_to_batch_nd(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.space_to_batch_nd(*args, **kwargs)
It accepts the same arguments as `tensorflow.space_to_batch_nd`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.space_to_batch_nd(x1, *args, **kwargs)
is equivalent to
builder.space_to_batch_nd(*args, **kwargs)(x1)
tensorflow.space_to_batch_nd
SpaceToBatch for N-D tensors of type T.
This operation divides "spatial" dimensions `[1, ..., M]` of the input into a grid of blocks of shape `block_shape`, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions `[1, ..., M]` correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to `paddings`. See below for a precise description.
Args:
input: A `Tensor`. N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, where spatial_shape has `M` dimensions.
block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D with shape `[M]`, all values must be >= 1.
paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`. 2-D with shape `[M, 2]`, all values must be >= 0. `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.
This operation is equivalent to the following steps:
1. Zero-pad the start and end of dimensions `[1, ..., M]` of the input according to `paddings` to produce `padded` of shape `padded_shape`.
2. Reshape `padded` to `reshaped_padded` of shape:

```
[batch] +
[padded_shape[1] / block_shape[0], block_shape[0], ...,
 padded_shape[M] / block_shape[M-1], block_shape[M-1]] +
remaining_shape
```

3. Permute dimensions of `reshaped_padded` to produce `permuted_reshaped_padded` of shape:

```
block_shape +
[batch] +
[padded_shape[1] / block_shape[0], ...,
 padded_shape[M] / block_shape[M-1]] +
remaining_shape
```

4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch dimension, producing an output tensor of shape:

```
[batch * prod(block_shape)] +
[padded_shape[1] / block_shape[0], ...,
 padded_shape[M] / block_shape[M-1]] +
remaining_shape
```

Some examples:
(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1], [2]], [[3], [4]]]]
```

The output tensor has shape `[4, 1, 1, 1]` and value:

```prettyprint
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

The output tensor has shape `[4, 1, 1, 3]` and value:

```prettyprint
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [0, 0]]`:

```prettyprint
x = [[[[1],  [2],  [3],  [4]],
      [[5],  [6],  [7],  [8]],
      [[9],  [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

The output tensor has shape `[4, 2, 2, 1]` and value:

```prettyprint
x = [[[[1], [3]], [[5], [7]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and paddings = `[[0, 0], [2, 0]]`:

```prettyprint
x = [[[[1], [2], [3], [4]],
      [[5], [6], [7], [8]]],
     [[[9], [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

The output tensor has shape `[8, 1, 3, 1]` and value:

```prettyprint
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]
```

Among others, this operation is useful for reducing atrous convolution into regular convolution.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def space_to_depth(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.space_to_depth(*args, **kwargs)
It accepts the same arguments as `tensorflow.space_to_depth`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.space_to_depth(x1, *args, **kwargs)
is equivalent to
builder.space_to_depth(*args, **kwargs)(x1)
tensorflow.space_to_depth
SpaceToDepth for tensors of type T.
Rearranges blocks of spatial data, into depth. More specifically, this op outputs a copy of the input tensor where values from the `height` and `width` dimensions are moved to the `depth` dimension. The attr `block_size` indicates the input block size and how the data is moved.
- Non-overlapping blocks of size `block_size x block_size` are rearranged into depth at each location.
- The depth of the output tensor is `input_depth * block_size * block_size`.
- The input tensor's height and width must be divisible by block_size.
That is, assuming the input is in the shape `[batch, height, width, depth]`, the shape of the output will be `[batch, height/block_size, width/block_size, depth*block_size*block_size]`.
This operation requires that the input tensor be of rank 4, and that `block_size` be >=1 and a divisor of both the input `height` and `width`.
This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.
For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2:

```prettyprint
x = [[[[1], [2]],
      [[3], [4]]]]
```

This operation will output a tensor of shape `[1, 1, 1, 4]`:

```prettyprint
[[[[1, 2, 3, 4]]]]
```

Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, the corresponding output will have a single element (i.e. width and height are both 1) and will have a depth of 4 channels (1 * block_size * block_size). The output element shape is `[1, 1, 4]`.
For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.

```prettyprint
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

This operation, for block_size of 2, will return the following tensor of shape `[1, 1, 1, 12]`

```prettyprint
[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

Similarly, for the following input of shape `[1, 4, 4, 1]`, and a block size of 2:

```prettyprint
x = [[[[1],  [2],  [5],  [6]],
      [[3],  [4],  [7],  [8]],
      [[9],  [10], [13], [14]],
      [[11], [12], [15], [16]]]]
```

the operator will return the following tensor of shape `[1, 2, 2, 4]`:

```prettyprint
x = [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]
```

Args:
input: A `Tensor`.
block_size: An `int` that is `>= 2`. The size of the spatial block.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
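A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

x = tf.constant([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]

# block_size 2 moves the 2x2 spatial block into depth: output shape is
# [1, 1, 1, 4], matching tensorflow.space_to_depth(x, 2) ==> [[[[1, 2, 3, 4]]]]
y = T.space_to_depth(2)(x)
```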
def sparse_add(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_add(*args, **kwargs)
It accepts the same arguments as `tensorflow.sparse_add`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_add(x1, *args, **kwargs)
is equivalent to
builder.sparse_add(*args, **kwargs)(x1)
tensorflow.sparse_add
Adds two tensors, at least one of which is a `SparseTensor`.
If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order of arguments does not matter. Use vanilla `tf.add()` for adding two dense `Tensor`s.
The indices of any input `SparseTensor` are assumed ordered in standard lexicographic order. If this is not the case, before this step run `SparseReorder` to restore index ordering.
If both arguments are sparse, we perform "clipping" as follows. By default, if two values sum to zero at some index, the output `SparseTensor` would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify `thresh`, indicating that if the sum has a magnitude strictly smaller than `thresh`, its corresponding value and index would then not be included. In particular, `thresh == 0.0` (default) means everything is kept and actual thresholding happens only for a positive value.
For example, suppose the logical sum of two sparse operands is (densified):

```
[       2]
[.1     0]
[ 6   -.2]
```

Then,
- thresh == 0 (the default): all 5 index/value pairs will be returned.
- thresh == 0.11: only .1 and 0 will vanish, and the remaining three index/value pairs will be returned.
- thresh == 0.21: .1, 0, and -.2 will vanish.
Args:
a: The first operand; `SparseTensor` or `Tensor`.
b: The second operand; `SparseTensor` or `Tensor`. At least one operand must be sparse.
thresh: A 0-D `Tensor`. The magnitude threshold that determines if an output value/index pair takes space. Its dtype should match that of the values if they are real; if the latter are complex64/complex128, then the dtype should be float32/float64, correspondingly.
Returns:
A `SparseTensor` or a `Tensor`, representing the sum.
Raises:
TypeError: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
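A brief usage sketch (not from the generated docs; the `T` builder instance and import path are assumptions):

```python
import tensorflow as tf
from tensorbuilder import T  # assumed entry point for the builder

a = tf.SparseTensor([[0, 0], [1, 1]], [1.0, 2.0], [2, 2])
b = tf.constant([[0.5, 0.0], [0.0, 0.5]])

# One sparse and one dense operand, so the result is a dense Tensor,
# matching tensorflow.sparse_add(a, b)
c = T.sparse_add(b)(a)
```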
def sparse_concat(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_concat(*args, **kwargs)
It accepts the same arguments as `tensorflow.sparse_concat`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_concat(x1, *args, **kwargs)
is equivalent to
builder.sparse_concat(*args, **kwargs)(x1)
tensorflow.sparse_concat
Concatenates a list of `SparseTensor` along the specified dimension.
Concatenation is with respect to the dense versions of each sparse input. It is assumed that each input is a `SparseTensor` whose elements are ordered along increasing dimension number.
If expand_nonconcat_dim is False, all inputs' shapes must match, except for the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are allowed to vary among all inputs.
The `indices`, `values`, and `shapes` lists must have the same length.
If expand_nonconcat_dim is False, then the output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension.
If expand_nonconcat_dim is True, then the output shape along the non-concat dimensions will be expanded to be the largest among all inputs, and it is the sum of the inputs' sizes along the concat dimension.
The output elements will be resorted to preserve the sort order along increasing dimension number.
This op runs in `O(M log M)` time, where `M` is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension.
For example, if `concat_dim = 1` and the inputs are

```
sp_inputs[0]: shape = [2, 3]
[0, 2]: "a"
[1, 0]: "b"
[1, 1]: "c"

sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"
```

then the output will be

```
shape = [2, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[1, 1]: "c"
```

Graphically this is equivalent to doing

```
[    a] concat [  d e  ] = [    a   d e  ]
[b c  ]        [       ]   [b c          ]
```

Another example, if 'concat_dim = 1' and the inputs are

```
sp_inputs[0]: shape = [3, 3]
[0, 2]: "a"
[1, 0]: "b"
[2, 1]: "c"

sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"
```

if expand_nonconcat_dim = False, this will result in an error. But if expand_nonconcat_dim = True, this will result in:

```
shape = [3, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[2, 1]: "c"
```

Graphically this is equivalent to doing

```
[    a] concat [  d e  ] = [    a   d e  ]
[b    ]        [       ]   [b            ]
[  c  ]                    [  c          ]
```

Args:
concat_dim: Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input `SparseTensor`.
sp_inputs: List of `SparseTensor` to concatenate.
name: A name prefix for the returned tensors (optional).
expand_nonconcat_dim: Whether to allow the expansion in the non-concat dimensions. Defaulted to False.
Returns:
A `SparseTensor` with the concatenated output.
Raises:
TypeError: If `sp_inputs` is not a list of `SparseTensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sparse_fill_empty_rows(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_fill_empty_rows(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_fill_empty_rows
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_fill_empty_rows(x1, *args, **kwargs)
is equivalent to
builder.sparse_fill_empty_rows(*args, **kwargs)(x1)
tensorflow.sparse_fill_empty_rows
Fills empty rows in the input 2-D `SparseTensor` with a default value.
This op adds entries with the specified `default_value` at index `[row, 0]` for any row in the input that does not already have a value.
For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:

    [0, 1]: a
    [0, 3]: b
    [2, 0]: c
    [3, 1]: d
Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:

    [0, 1]: a
    [0, 3]: b
    [1, 0]: default_value
    [2, 0]: c
    [3, 1]: d
    [4, 0]: default_value
Note that the input may have empty columns at the end, with no effect on this op.
The output `SparseTensor` will be in row-major order and will have the same shape as the input.
This op also returns an indicator vector such that `empty_row_indicator[i] = True` iff row `i` was an empty row.
Args:
  sp_input: A `SparseTensor` with shape `[N, M]`.
  default_value: The value to fill for empty rows, with the same type as
    `sp_input`.
  name: A name prefix for the returned tensors (optional).
Returns:
  sp_ordered_output: A `SparseTensor` with shape `[N, M]`, and with all empty
    rows filled in with `default_value`.
  empty_row_indicator: A bool vector of length `N` indicating whether each
    input row was empty.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
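A minimal sketch of this method, reusing the `[5, 6]` example above (assuming a `TensorBuilder` instance named `builder` and the TF-0.x signatures shown here):

```python
import tensorflow as tf

# The [5, 6] example above: rows 1 and 4 are empty.
sp = tf.SparseTensor([[0, 1], [0, 3], [2, 0], [3, 1]],
                     ["a", "b", "c", "d"], [5, 6])

# Direct call returns the filled tensor and the indicator vector.
filled, empty_rows = tf.sparse_fill_empty_rows(sp, "x")

# Builder form: default_value is bound now, sp_input is supplied last.
filled, empty_rows = builder.sparse_fill_empty_rows("x")(sp)
```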
def sparse_mask(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_mask(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_mask
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_mask(x1, *args, **kwargs)
is equivalent to
builder.sparse_mask(*args, **kwargs)(x1)
tensorflow.sparse_mask
Masks elements of `IndexedSlices`.
Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that contains a subset of the slices of `a`. Only the slices at indices not specified in `mask_indices` are returned.
This is useful when you need to extract a subset of slices in an `IndexedSlices` object.
For example:
```python
# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices => [12, 26, 37, 45]
tf.shape(a.values) => [4, 10]

# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse_mask(a, [12, 45])
b.indices => [26, 37]
tf.shape(b.values) => [2, 10]
```
Args:

* `a`: An `IndexedSlices` instance.
* `mask_indices`: Indices of elements to mask.
* `name`: A name for the operation (optional).

Returns:
  The masked `IndexedSlices` instance.
def sparse_maximum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_maximum(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_maximum
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_maximum(x1, *args, **kwargs)
is equivalent to
builder.sparse_maximum(*args, **kwargs)(x1)
tensorflow.sparse_maximum
Returns the element-wise max of two SparseTensors.
Assumes the two SparseTensors have the same shape, i.e., no broadcasting. Example:
```python
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[1]], [1], [7])
res = tf.sparse_maximum(sp_zero, sp_one).eval()
# "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).
```
Args:
  sp_a: a `SparseTensor` operand whose dtype is real, and indices
    lexicographically ordered.
  sp_b: the other `SparseTensor` operand with the same requirements (and the
    same shape).
  name: optional name of the operation.
Returns:
  output: the output SparseTensor.
def sparse_merge(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_merge(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_merge
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_merge(x1, *args, **kwargs)
is equivalent to
builder.sparse_merge(*args, **kwargs)(x1)
tensorflow.sparse_merge
Combines a batch of feature ids and values into a single `SparseTensor`.
The most common use case for this function occurs when feature ids and their corresponding values are stored in `Example` protos on disk. `parse_example` will return a batch of ids and a batch of values, and this function joins them into a single logical `SparseTensor` for use in functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc.
The `SparseTensor` returned by this function has the following properties:

- `indices` is equivalent to `sp_ids.indices` with the last dimension discarded and replaced with `sp_ids.values`.
- `values` is simply `sp_values.values`.
- If `sp_ids.shape = [D0, D1, ..., Dn, K]`, then `output.shape = [D0, D1, ..., Dn, vocab_size]`.
For example, consider the following feature vectors:

```python
vector1 = [-3, 0, 0, 0, 0, 0]
vector2 = [ 0, 1, 0, 4, 1, 0]
vector3 = [ 5, 0, 0, 9, 0, 0]
```
These might be stored sparsely in the following Example protos by storing only the feature ids (column number if the vectors are treated as a matrix) of the non-zero elements and the corresponding values:
```python
examples = [Example(features={
                "ids": Feature(int64_list=Int64List(value=[0])),
                "values": Feature(float_list=FloatList(value=[-3]))}),
            Example(features={
                "ids": Feature(int64_list=Int64List(value=[1, 4, 3])),
                "values": Feature(float_list=FloatList(value=[1, 1, 4]))}),
            Example(features={
                "ids": Feature(int64_list=Int64List(value=[0, 3])),
                "values": Feature(float_list=FloatList(value=[5, 9]))})]
```
The result of calling `parse_example` on these examples will produce a dictionary with entries for "ids" and "values". Passing those two objects to this function along with `vocab_size=6` will produce a `SparseTensor` that sparsely represents all three instances. Namely, the `indices` property will contain the coordinates of the non-zero entries in the feature matrix (the first dimension is the row number in the matrix, i.e., the index within the batch, and the second dimension is the column number, i.e., the feature id); `values` will contain the actual values. `shape` will be the shape of the original matrix, i.e., (3, 6). For our example above, the output will be equal to:
```python
SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],
             values=[-3, 1, 4, 1, 5, 9],
             shape=[3, 6])
```
Args:
  sp_ids: A `SparseTensor` with `values` property of type `int32` or `int64`.
  sp_values: A `SparseTensor` of any type.
  vocab_size: A scalar `int64` Tensor (or Python int) containing the new size
    of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.
  name: A name prefix for the returned tensors (optional).
  already_sorted: A boolean to specify whether the per-batch values in
    `sp_values` are already sorted. If so skip sorting, False by default
    (optional).
Returns:
  A `SparseTensor` compactly representing a batch of feature ids and values,
  useful for passing to functions that expect such a `SparseTensor`.
Raises:
  TypeError: If `sp_ids` or `sp_values` is not a `SparseTensor`.
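A minimal sketch of the parse-then-merge flow described above; `serialized_examples` is a hypothetical tensor of serialized `Example` protos, and `builder` stands for a `TensorBuilder` instance:

```python
import tensorflow as tf

# Hypothetical parse_example output for the three Example protos above.
parsed = tf.parse_example(serialized_examples, {
    "ids": tf.VarLenFeature(tf.int64),
    "values": tf.VarLenFeature(tf.float32)})

# Direct call: merge ids and values into one [3, 6] SparseTensor.
merged = tf.sparse_merge(parsed["ids"], parsed["values"], vocab_size=6)

# Builder form: sp_ids (the omitted 1st argument) is supplied last.
merged = builder.sparse_merge(parsed["values"], vocab_size=6)(parsed["ids"])
```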
def sparse_minimum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_minimum(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_minimum
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_minimum(x1, *args, **kwargs)
is equivalent to
builder.sparse_minimum(*args, **kwargs)(x1)
tensorflow.sparse_minimum
Returns the element-wise min of two SparseTensors.
Assumes the two SparseTensors have the same shape, i.e., no broadcasting. Example:
```python
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[1]], [1], [7])
res = tf.sparse_minimum(sp_zero, sp_one).eval()
# "res" should be equal to SparseTensor([[0], [1]], [0, 0], [7]).
```
Args:
  sp_a: a `SparseTensor` operand whose dtype is real, and indices
    lexicographically ordered.
  sp_b: the other `SparseTensor` operand with the same requirements (and the
    same shape).
  name: optional name of the operation.
Returns:
  output: the output SparseTensor.
def sparse_placeholder(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_placeholder(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_placeholder
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_placeholder(x1, *args, **kwargs)
is equivalent to
builder.sparse_placeholder(*args, **kwargs)(x1)
tensorflow.sparse_placeholder
Inserts a placeholder for a sparse tensor that will be always fed.
Important: This sparse tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`.
For example:
```python
x = tf.sparse_placeholder(tf.float32)
y = tf.sparse_reduce_sum(x)

with tf.Session() as sess:
  print(sess.run(y))  # ERROR: will fail because x was not fed.

  indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
  values = np.array([1.0, 2.0], dtype=np.float32)
  shape = np.array([7, 9, 2], dtype=np.int64)
  print(sess.run(y, feed_dict={
      x: tf.SparseTensorValue(indices, values, shape)}))  # Will succeed.
  print(sess.run(y, feed_dict={
      x: (indices, values, shape)}))  # Will succeed.

  sp = tf.SparseTensor(indices=indices, values=values, shape=shape)
  sp_value = sp.eval(session=sess)
  print(sess.run(y, feed_dict={x: sp_value}))  # Will succeed.
```
Args:
  dtype: The type of `values` elements in the tensor to be fed.
  shape: The shape of the tensor to be fed (optional). If the shape is not
    specified, you can feed a sparse tensor of any shape.
  name: A name for prefixing the operations (optional).
Returns:
  A `SparseTensor` that may be used as a handle for feeding a value, but not
  evaluated directly.
def sparse_reduce_sum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_reduce_sum(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_reduce_sum
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_reduce_sum(x1, *args, **kwargs)
is equivalent to
builder.sparse_reduce_sum(*args, **kwargs)(x1)
tensorflow.sparse_reduce_sum
Computes the sum of elements across dimensions of a SparseTensor.
This Op takes a SparseTensor and is the sparse counterpart to `tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor` instead of a sparse one.
Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
If `reduction_axes` has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, similar to the indexing rules in Python.
For example:
```python
# 'x' represents [[1, ?, 1]
#                 [?, 1, ?]]
# where ? is implicitly-zero.
tf.sparse_reduce_sum(x) ==> 3
tf.sparse_reduce_sum(x, 0) ==> [1, 1, 1]
tf.sparse_reduce_sum(x, 1) ==> [2, 1]  # Can also use -1 as the axis.
tf.sparse_reduce_sum(x, 1, keep_dims=True) ==> [[2], [1]]
tf.sparse_reduce_sum(x, [0, 1]) ==> 3
```
Args:
  sp_input: The SparseTensor to reduce. Should have numeric type.
  reduction_axes: The dimensions to reduce; list or scalar. If `None` (the
    default), reduces all dimensions.
  keep_dims: If true, retain reduced dimensions with length 1.
Returns:
  The reduced Tensor.
def sparse_reduce_sum_sparse(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_reduce_sum_sparse(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_reduce_sum_sparse
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_reduce_sum_sparse(x1, *args, **kwargs)
is equivalent to
builder.sparse_reduce_sum_sparse(*args, **kwargs)(x1)
tensorflow.sparse_reduce_sum_sparse
Computes the sum of elements across dimensions of a SparseTensor.
This Op takes a SparseTensor and is the sparse counterpart to `tf.reduce_sum()`. In contrast to SparseReduceSum, this Op returns a SparseTensor.
Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
If `reduction_axes` has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.
Args:
  sp_input: The SparseTensor to reduce. Should have numeric type.
  reduction_axes: The dimensions to reduce; list or scalar. If `None` (the
    default), reduces all dimensions.
  keep_dims: If true, retain reduced dimensions with length 1.
Returns:
  The reduced SparseTensor.
def sparse_reorder(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_reorder(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_reorder
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_reorder(x1, *args, **kwargs)
is equivalent to
builder.sparse_reorder(*args, **kwargs)(x1)
tensorflow.sparse_reorder
Reorders a `SparseTensor` into the canonical, row-major ordering.
Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values to add entries.
Reordering does not affect the shape of the `SparseTensor`.
For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:

    [0, 3]: b
    [0, 1]: a
    [3, 1]: d
    [2, 0]: c

then the output will be a `SparseTensor` of shape `[4, 5]` and `indices` / `values`:

    [0, 1]: a
    [0, 3]: b
    [2, 0]: c
    [3, 1]: d
Args:
  sp_input: The input `SparseTensor`.
  name: A name prefix for the returned tensors (optional).
Returns:
  A `SparseTensor` with the same shape and non-empty values, but in
  canonical ordering.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
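A minimal sketch, reusing the out-of-order example above (assuming a `TensorBuilder` instance named `builder`):

```python
import tensorflow as tf

# The out-of-order [4, 5] example above.
sp = tf.SparseTensor([[0, 3], [0, 1], [3, 1], [2, 0]],
                     ["b", "a", "d", "c"], [4, 5])

reordered = tf.sparse_reorder(sp)         # direct call
reordered = builder.sparse_reorder()(sp)  # builder form: no extra arguments
```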
def sparse_reset_shape(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_reset_shape(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_reset_shape
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_reset_shape(x1, *args, **kwargs)
is equivalent to
builder.sparse_reset_shape(*args, **kwargs)(x1)
tensorflow.sparse_reset_shape
Resets the shape of a `SparseTensor` with indices and values unchanged.
If `new_shape` is None, returns a copy of `sp_input` with its shape reset to the tight bounding box of `sp_input`.
If `new_shape` is provided, then it must be larger or equal in all dimensions compared to the shape of `sp_input`. When this condition is met, the returned SparseTensor will have its shape reset to `new_shape` and its indices and values unchanged from that of `sp_input`.
For example, consider a `sp_input` with shape `[2, 3, 5]`:

    [0, 0, 1]: a
    [0, 1, 0]: b
    [0, 2, 2]: c
    [1, 0, 3]: d
- It is an error to set `new_shape` as [3, 7] since this represents a rank-2 tensor while `sp_input` is rank-3. This is either a ValueError during graph construction (if both shapes are known) or an OpError during run time.
- Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or equal in every dimension compared to the original shape [2, 3, 5].
- On the other hand, setting `new_shape` as [2, 3, 4] is also an error: The third dimension is smaller than the original shape [2, 3, 5] (and an `InvalidArgumentError` will be raised).
- If `new_shape` is None, the returned SparseTensor will have a shape [2, 3, 4], which is the tight bounding box of `sp_input`.
Args:
  sp_input: The input `SparseTensor`.
  new_shape: None or a vector representing the new shape for the returned
    `SparseTensor`.
Returns:
  A `SparseTensor` with indices and values unchanged from `input_sp`. Its shape
  is `new_shape` if that is set. Otherwise it is the tight bounding box of
  `input_sp`.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
  ValueError: If `new_shape` represents a tensor with a different rank from
    that of `sp_input` (if shapes are known when graph is constructed).
  OpError:
    - If `new_shape` has dimension sizes that are too small.
    - If shapes are not known during graph construction time, and during run
      time it is found out that the ranks do not match.
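A minimal sketch, reusing the `[2, 3, 5]` example above (assuming a `TensorBuilder` instance named `builder`):

```python
import tensorflow as tf

# The [2, 3, 5] example above.
sp = tf.SparseTensor([[0, 0, 1], [0, 1, 0], [0, 2, 2], [1, 0, 3]],
                     ["a", "b", "c", "d"], [2, 3, 5])

tight = tf.sparse_reset_shape(sp)                  # shape becomes [2, 3, 4]
grown = tf.sparse_reset_shape(sp, [2, 3, 6])       # fine: >= in every dim
grown = builder.sparse_reset_shape([2, 3, 6])(sp)  # builder form
```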
def sparse_reshape(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_reshape(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_reshape
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_reshape(x1, *args, **kwargs)
is equivalent to
builder.sparse_reshape(*args, **kwargs)(x1)
tensorflow.sparse_reshape
Reshapes a `SparseTensor` to represent values in a new dense shape.
This operation has the same semantics as `reshape` on the represented dense tensor. The indices of non-empty values in `sp_input` are recomputed based on the new dense shape, and a new `SparseTensor` is returned containing the new indices and new shape. The order of non-empty values in `sp_input` is unchanged.
If one component of `shape` is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of `shape` can be -1. The number of dense elements implied by `shape` must be the same as the number of dense elements originally represented by `sp_input`.
For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`:

    [0, 0, 0]: a
    [0, 0, 1]: b
    [0, 1, 0]: c
    [1, 0, 0]: d
    [1, 2, 3]: e

and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of shape `[9, 4]` and `indices` / `values`:

    [0, 0]: a
    [0, 1]: b
    [1, 2]: c
    [4, 2]: d
    [8, 1]: e
Args:
  sp_input: The input `SparseTensor`.
  shape: A 1-D (vector) int64 `Tensor` specifying the new dense shape of the
    represented `SparseTensor`.
  name: A name prefix for the returned tensors (optional).
Returns:
  A `SparseTensor` with the same non-empty values but with indices calculated
  by the new dense shape.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
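A minimal sketch, reusing the `[2, 3, 6]` example above (assuming a `TensorBuilder` instance named `builder`):

```python
import tensorflow as tf

# The [2, 3, 6] example above, reshaped to [9, -1] => [9, 4].
sp = tf.SparseTensor(
    [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 2, 3]],
    ["a", "b", "c", "d", "e"], [2, 3, 6])

reshaped = tf.sparse_reshape(sp, [9, -1])       # direct call
reshaped = builder.sparse_reshape([9, -1])(sp)  # builder form
```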
def sparse_retain(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_retain(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_retain
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_retain(x1, *args, **kwargs)
is equivalent to
builder.sparse_retain(*args, **kwargs)(x1)
tensorflow.sparse_retain
Retains specified non-empty values within a `SparseTensor`.
For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:

    [0, 1]: a
    [0, 3]: b
    [2, 0]: c
    [3, 1]: d

and `to_retain = [True, False, False, True]`, then the output will be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:

    [0, 1]: a
    [3, 1]: d
Args:
  sp_input: The input `SparseTensor` with `N` non-empty elements.
  to_retain: A bool vector of length `N` with `M` true values.
Returns:
  A `SparseTensor` with the same shape as the input and `M` non-empty
  elements corresponding to the true positions in `to_retain`.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
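A minimal sketch, reusing the `[4, 5]` example above (assuming a `TensorBuilder` instance named `builder`):

```python
import tensorflow as tf

# The [4, 5] example above: keep the 1st and 4th non-empty values.
sp = tf.SparseTensor([[0, 1], [0, 3], [2, 0], [3, 1]],
                     ["a", "b", "c", "d"], [4, 5])

kept = tf.sparse_retain(sp, [True, False, False, True])
kept = builder.sparse_retain([True, False, False, True])(sp)  # builder form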
def sparse_segment_mean(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_segment_mean(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_segment_mean
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_segment_mean(x1, *args, **kwargs)
is equivalent to
builder.sparse_segment_mean(*args, **kwargs)(x1)
tensorflow.sparse_segment_mean
Computes the mean along sparse segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified by `indices`.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A 1-D tensor. Has same rank as `segment_ids`.
  segment_ids: A `Tensor` of type `int32`.
    A 1-D tensor. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`.
  Has same shape as data, except for dimension 0 which
  has size `k`, the number of segments.
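A minimal sketch of segment averaging with this op (assuming a `TensorBuilder` instance named `builder`; the data values are illustrative):

```python
import tensorflow as tf

c = tf.constant([[1.0, 2.0, 3.0, 4.0],
                 [-1.0, -2.0, -3.0, -4.0],
                 [5.0, 6.0, 7.0, 8.0]])

# Average rows 0 and 1 into segment 0; row 2 alone forms segment 1.
means = tf.sparse_segment_mean(c, tf.constant([0, 1, 2]),
                               tf.constant([0, 0, 1]))
# ==> [[0, 0, 0, 0], [5, 6, 7, 8]]

# Builder form: data (the omitted 1st argument) is supplied last.
means = builder.sparse_segment_mean(tf.constant([0, 1, 2]),
                                    tf.constant([0, 0, 1]))(c)
```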
def sparse_segment_sqrt_n(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_segment_sqrt_n(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_segment_sqrt_n
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_segment_sqrt_n(x1, *args, **kwargs)
is equivalent to
builder.sparse_segment_sqrt_n(*args, **kwargs)(x1)
tensorflow.sparse_segment_sqrt_n
Computes the sum along sparse segments of a tensor divided by the sqrt of N.
N is the size of the segment being reduced.
Read the section on Segmentation for an explanation of segments.
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A 1-D tensor. Has same rank as `segment_ids`.
  segment_ids: A `Tensor` of type `int32`.
    A 1-D tensor. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`.
  Has same shape as data, except for dimension 0 which
  has size `k`, the number of segments.
def sparse_segment_sum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_segment_sum(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_segment_sum
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_segment_sum(x1, *args, **kwargs)
is equivalent to
builder.sparse_segment_sum(*args, **kwargs)(x1)
tensorflow.sparse_segment_sum
Computes the sum along sparse segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified by `indices`.
For example:
```prettyprint
c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])

# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
  ==> [[0 0 0 0]]

# Select two rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
  ==> [[ 1  2  3  4]
       [-1 -2 -3 -4]]

# Select all rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
  ==> [[0 0 0 0]
       [5 6 7 8]]

# Which is equivalent to:
tf.segment_sum(c, tf.constant([0, 0, 1]))
```
Args:
  data: A `Tensor`. Must be one of the following types: `float32`, `float64`,
    `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
  indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A 1-D tensor. Has same rank as `segment_ids`.
  segment_ids: A `Tensor` of type `int32`.
    A 1-D tensor. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `data`.
  Has same shape as data, except for dimension 0 which
  has size `k`, the number of segments.
def sparse_softmax(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_softmax(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_softmax
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_softmax(x1, *args, **kwargs)
is equivalent to
builder.sparse_softmax(*args, **kwargs)(x1)
tensorflow.sparse_softmax
Applies softmax to a batched N-D `SparseTensor`.
The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` (where `N >= 2`), and with indices sorted in the canonical lexicographic order.
This op is equivalent to applying the normal `tf.nn.softmax()` to each innermost logical submatrix with shape `[B, C]`, but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to:
(1) Applies `tf.nn.softmax()` to a densified view of each innermost submatrix with shape `[B, C]`, along the size-C dimension;
(2) Masks out the original implicitly-zero locations;
(3) Renormalizes the remaining elements.
Hence, the `SparseTensor` result has exactly the same non-zero indices and shape.
Example:
```python
# First batch:
#   [?   e.]
#   [1.  ? ]
# Second batch:
#   [e   ? ]
#   [e   e ]
shape = [2, 2, 2]  # 3-D SparseTensor
values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
indices = np.vstack(np.where(values)).astype(np.int64).T

result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape))
# ...returning a 3-D SparseTensor, equivalent to:
#   [?   1.]     [1    ?]
#   [1.  ? ] and [.5  .5]
# where ? means implicitly zero.
```
Args:
  sp_input: N-D `SparseTensor`, where `N >= 2`.
  name: optional name of the operation.
Returns:
  output: N-D `SparseTensor` representing the results.
def sparse_softmax_cross_entropy_with_logits(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_softmax_cross_entropy_with_logits(*args, **kwargs)
It accepts the same arguments as tf.nn.sparse_softmax_cross_entropy_with_logits
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.sparse_softmax_cross_entropy_with_logits(x1, *args, **kwargs)
is equivalent to
builder.sparse_softmax_cross_entropy_with_logits(*args, **kwargs)(x1)
tf.nn.sparse_softmax_cross_entropy_with_logits
Computes sparse softmax cross entropy between `logits` and `labels`.
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
NOTE: For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the `labels` vector must provide a single specific index for the true class for each row of `logits` (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see `softmax_cross_entropy_with_logits`.
WARNING: This op expects unscaled logits, since it performs a `softmax` on `logits` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.
A common use case is to have logits of shape `[batch_size, num_classes]` and labels of shape `[batch_size]`. But higher dimensions are supported.
Args:
  logits: Unscaled log probabilities of rank `r` and shape
    `[d_0, d_1, ..., d_{r-2}, num_classes]` and dtype `float32` or `float64`.
  labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-2}]` and dtype `int32` or
    `int64`. Each entry in `labels` must be an index in `[0, num_classes)`.
    Other values will raise an exception when this op is run on CPU, and
    return `NaN` for the corresponding loss and gradient rows on GPU.
  name: A name for the operation (optional).
Returns:
  A `Tensor` of the same shape as `labels` and of the same type as `logits`
  with the softmax cross entropy loss.
Raises:
  ValueError: If logits are scalars (need to have rank >= 1) or if the rank
    of the labels is not equal to the rank of the logits minus one.
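A minimal sketch of a classification loss with this op, using the positional argument order documented above (logits first); `builder` stands for a `TensorBuilder` instance, and the shapes are illustrative:

```python
import tensorflow as tf

logits = tf.random_normal([32, 10])                       # [batch_size, num_classes]
labels = tf.random_uniform([32], 0, 10, dtype=tf.int64)   # class index per row

# Direct call (positional order of this TF version: logits first).
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels))

# Builder form: labels are bound now, logits are supplied last.
loss_fn = builder.sparse_softmax_cross_entropy_with_logits(labels)
loss = tf.reduce_mean(loss_fn(logits))
```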
def sparse_softmax_cross_entropy_with_logits_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_softmax_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.convolution2d
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.sparse_softmax_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sparse_softmax_cross_entropy_with_logits`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
  inputs: a 4-D tensor `[batch_size, height, width, channels]`.
  num_outputs: integer, the number of output filters.
  kernel_size: a list of length 2 `[kernel_height, kernel_width]` of
    the filters. Can be an int if both values are the same.
  stride: a list of length 2 `[stride_height, stride_width]`.
    Can be an int if both strides are the same. Note that presently
    both strides must have the same value.
  padding: one of `VALID` or `SAME`.
  rate: integer. If less than or equal to 1, a standard convolution is used.
    If greater than 1, then atrous convolution is applied and `stride`
    must be set to 1.
  activation_fn: activation function, set to None to skip it and maintain
    a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If
    `normalizer_fn` is provided then `biases_initializer` and
    `biases_regularizer` are ignored and `biases` are not created nor added.
    Default set to None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be
    able to reuse the layer scope must be given.
  variables_collections: optional list of collections for all the variables or
    a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection
    `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for `variable_scope`.
Returns:
  a tensor representing the output of the operation.
Raises:
  ValueError: if both `rate` and `stride` are larger than one.
def sparse_softmax_cross_entropy_with_logits_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_softmax_cross_entropy_with_logits_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.sparse_softmax_cross_entropy_with_logits_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sparse_softmax_cross_entropy_with_logits`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
  inputs: A tensor with at least rank 2 and a known value for the last
    dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
  num_outputs: Integer or long, the number of output units in the layer.
  activation_fn: activation function, set to None to skip it and maintain
    a linear activation.
  normalizer_fn: normalization function to use instead of `biases`. If
    `normalizer_fn` is provided then `biases_initializer` and
    `biases_regularizer` are ignored and `biases` are not created nor added.
    Default set to None for no normalizer function.
  normalizer_params: normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None skip biases.
  biases_regularizer: Optional regularizer for the biases.
  reuse: whether or not the layer and its variables should be reused. To be
    able to reuse the layer scope must be given.
  variables_collections: Optional list of collections for all the variables or
    a dictionary containing a different list of collections per variable.
  outputs_collections: collection to add the outputs.
  trainable: If `True` also add variables to the graph collection
    `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
  scope: Optional scope for variable_scope.
Returns:
  the tensor variable representing the result of the series of operations.
Raises:
  ValueError: if x has rank less than 2 or if its last dimension is not set.
def sparse_split(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_split(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_split
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_split(x1, *args, **kwargs)
is equivalent to
builder.sparse_split(*args, **kwargs)(x1)
tensorflow.sparse_split
Split a `SparseTensor` into `num_split` tensors along `split_dim`.
If `sp_input.shape[split_dim]` is not an integer multiple of `num_split`, each slice starting from `0:shape[split_dim] % num_split` gets one extra dimension. For example, if `split_dim = 1` and `num_split = 2` and the input is:
    input_tensor = shape = [2, 7]
    [    a   d e  ]
    [b c          ]

Graphically the output tensors are:

    output_tensor[0] = shape = [2, 4]
    [    a ]
    [b c   ]

    output_tensor[1] = shape = [2, 3]
    [ d e  ]
    [      ]
Args:
  split_dim: A 0-D `int32` `Tensor`. The dimension along which to split.
  num_split: A Python integer. The number of ways to split.
  sp_input: The `SparseTensor` to split.
  name: A name for the operation (optional).
Returns:
  `num_split` `SparseTensor` objects resulting from splitting `value`.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
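A minimal sketch, reusing the `[2, 7]` example above (assuming a `TensorBuilder` instance named `builder` and the documented `(split_dim, num_split, sp_input)` order):

```python
import tensorflow as tf

# The [2, 7] example above, split two ways along dimension 1.
sp = tf.SparseTensor([[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]],
                     ["a", "d", "e", "b", "c"], [2, 7])

parts = tf.sparse_split(1, 2, sp)       # direct call: shapes [2, 4] and [2, 3]
parts = builder.sparse_split(2, sp)(1)  # builder form: split_dim supplied last
```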
def sparse_tensor_dense_matmul(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_tensor_dense_matmul(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_tensor_dense_matmul
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_tensor_dense_matmul(x1, *args, **kwargs)
is equivalent to
builder.sparse_tensor_dense_matmul(*args, **kwargs)(x1)
tensorflow.sparse_tensor_dense_matmul
Multiply SparseTensor (of rank 2) "A" by dense matrix "B".
No validity checking is performed on the indices of A. However, the following input format is recommended for optimal behavior:

    if adjoint_a == false:
      A should be sorted in lexicographically increasing order. Use
      sparse_reorder if you're not sure.
    if adjoint_a == true:
      A should be sorted in order of increasing dimension 1 (i.e., "column
      major" order instead of "row major" order).
Deciding when to use sparse_tensor_dense_matmul vs. matmul(sp_a=True):
There are a number of questions to ask in the decision process, including:
- Will the SparseTensor A fit in memory if densified?
- Is the column count of the product large (>> 1)?
- Is the density of A larger than approximately 15%?
If the answer to several of these questions is yes, consider converting the SparseTensor to a dense one and using tf.matmul with sp_a=True.
This operation tends to perform well when A is more sparse, if the column size of the product is small (e.g. matrix-vector multiplication), or if sp_a.shape takes on large values.
Below is a rough speed comparison between sparse_tensor_dense_matmul, labelled 'sparse', and matmul(sp_a=True), labelled 'dense'. For purposes of the comparison, the time spent converting from a SparseTensor to a dense Tensor is not included, so it is overly conservative with respect to the time ratio.
Benchmark system:
  CPU: Intel Ivybridge with HyperThreading (6 cores), dL1:32KB dL2:256KB dL3:12MB
  GPU: NVidia Tesla k40c
Compiled with: `-c opt --config=cuda --copt=-mavx`
```
tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks
A sparse [m, k] with % nonzero values between 1% and 80%
B dense [k, n]

% nnz  n   gpu    m     k     dt(dense)    dt(sparse)   dt(sparse)/dt(dense)
0.01   1   True   100   100   0.000221166  0.00010154   0.459112
0.01   1   True   100   1000  0.00033858   0.000109275  0.322745
0.01   1   True   1000  100   0.000310557  9.85661e-05  0.317385
0.01   1   True   1000  1000  0.0008721    0.000100875  0.115669
... (remaining rows, covering % nnz in {0.01, 0.2, 0.5, 0.8}, n in {1, 10, 25},
     and m, k in {100, 1000}, abbreviated) ...
0.8    25  False  100   100   0.000240243  0.000175047  0.728625
0.8    25  False  100   1000  0.000578102  0.00104499   1.80763
0.8    25  False  1000  100   0.000485113  0.000776849  1.60138
0.8    25  False  1000  1000  0.00211448   0.00752736   3.55992
```
Args:
  sp_a: SparseTensor A, of rank 2.
  b: A dense Matrix with the same dtype as sp_a.
  adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex,
    this is transpose(conj(A)). Otherwise it's transpose(A).
  adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex,
    this is transpose(conj(B)). Otherwise it's transpose(B).
  name: A name prefix for the returned tensors (optional).
Returns:
  A dense matrix (pseudo-code in dense np.matrix notation):
    A = A.H if adjoint_a else A
    B = B.H if adjoint_b else B
    return A*B
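A minimal sketch of a sparse-times-dense product (assuming a `TensorBuilder` instance named `builder`; the matrices are illustrative):

```python
import tensorflow as tf

# A sparse [3, 4] matrix times a dense [4, 2] matrix.
sp_a = tf.SparseTensor([[0, 0], [1, 2], [2, 3]], [1.0, 2.0, 3.0], [3, 4])
b = tf.ones([4, 2])

product = tf.sparse_tensor_dense_matmul(sp_a, b)        # dense [3, 2] result
product = builder.sparse_tensor_dense_matmul(b)(sp_a)   # builder form
```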
def sparse_tensor_to_dense(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_tensor_to_dense(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_tensor_to_dense
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_tensor_to_dense(x1, *args, **kwargs)
is equivalent to
builder.sparse_tensor_to_dense(*args, **kwargs)(x1)
tensorflow.sparse_tensor_to_dense
Converts a `SparseTensor` into a dense tensor.
This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s.
For example, if `sp_input` has shape `[3, 5]` and non-empty string values:

    [0, 1]: a
    [0, 3]: b
    [2, 0]: c

and `default_value` is `x`, then the output will be a dense `[3, 5]` string tensor with values:

    [[x a x b x]
     [x x x x x]
     [c x x x x]]
Indices must be without repeats. This is only tested if validate_indices is True.
Args:
  sp_input: The input `SparseTensor`.
  default_value: Scalar value to set for indices not specified in
    `sp_input`. Defaults to zero.
  validate_indices: A boolean value. If `True`, indices are checked to make
    sure they are sorted in lexicographic order and that there are no repeats.
  name: A name prefix for the returned tensors (optional).
Returns:
  A dense tensor with shape `sp_input.shape` and values specified by
  the non-empty values in `sp_input`. Indices not in `sp_input` are assigned
  `default_value`.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
def sparse_to_dense(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_to_dense(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_to_dense
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_to_dense(x1, *args, **kwargs)
is equivalent to
builder.sparse_to_dense(*args, **kwargs)(x1)
tensorflow.sparse_to_dense
Converts a sparse representation into a dense tensor.
Builds an array `dense` with shape `output_shape` such that

```python
# If sparse_indices is scalar
dense[i] = (i == sparse_indices ? sparse_values : default_value)

# If sparse_indices is a vector, then for each i
dense[sparse_indices[i]] = sparse_values[i]

# If sparse_indices is an n by d matrix, then for each i in [0, n)
dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
```
All other values in `dense` are set to `default_value`. If `sparse_values` is a scalar, all sparse indices are set to this single value.
Indices should be sorted in lexicographic order, and indices must not contain any repeats. If `validate_indices` is True, these properties are checked during execution.
Args:
  sparse_indices: A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`.
    `sparse_indices[i]` contains the complete index where `sparse_values[i]`
    will be placed.
  output_shape: A 1-D `Tensor` of the same type as `sparse_indices`. Shape
    of the dense output tensor.
  sparse_values: A 0-D or 1-D `Tensor`. Values corresponding to each row of
    `sparse_indices`, or a scalar value to be used for all sparse indices.
  default_value: A 0-D `Tensor` of the same type as `sparse_values`. Value
    to set for indices not specified in `sparse_indices`. Defaults to zero.
  validate_indices: A boolean value. If True, indices are checked to make
    sure they are sorted in lexicographic order and that there are no repeats.
  name: A name for the operation (optional).
Returns:
  Dense `Tensor` of shape `output_shape`. Has the same type as
  `sparse_values`.
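A minimal sketch of scattering values into a dense tensor (assuming a `TensorBuilder` instance named `builder`; indices and values are illustrative):

```python
import tensorflow as tf

# Scatter 1.0 and 2.0 into a [3, 4] dense tensor, zero elsewhere.
dense = tf.sparse_to_dense(sparse_indices=[[0, 1], [2, 3]],
                           output_shape=[3, 4],
                           sparse_values=[1.0, 2.0])

# Builder form: sparse_indices (the omitted 1st argument) comes last.
dense = builder.sparse_to_dense([3, 4], [1.0, 2.0])([[0, 1], [2, 3]])
```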
def sparse_to_indicator(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_to_indicator(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_to_indicator
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_to_indicator(x1, *args, **kwargs)
is equivalent to
builder.sparse_to_indicator(*args, **kwargs)(x1)
tensorflow.sparse_to_indicator
Converts a `SparseTensor` of ids into a dense bool indicator tensor.
The last dimension of `sp_input.indices` is discarded and replaced with the values of `sp_input`. If `sp_input.shape = [D0, D1, ..., Dn, K]`, then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where
`output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True`
and False elsewhere in `output`.
For example, if `sp_input.shape = [2, 3, 4]` with non-empty values:

    [0, 0, 0]: 0
    [0, 1, 0]: 10
    [1, 0, 3]: 103
    [1, 1, 2]: 150
    [1, 1, 3]: 149
    [1, 1, 4]: 150
    [1, 2, 1]: 121

and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool tensor with False everywhere except at positions `(0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150), (1, 2, 121)`.
Note that repeats are allowed in the input SparseTensor.
This op is useful for converting `SparseTensor`s into dense formats for compatibility with ops that expect dense tensors.
The input `SparseTensor` must be in row-major order.
Args:
  sp_input: A `SparseTensor` with `values` property of type `int32` or
    `int64`.
  vocab_size: A scalar int64 Tensor (or Python int) containing the new size
    of the last dimension, `all(0 <= sp_input.values < vocab_size)`.
  name: A name prefix for the returned tensors (optional).
Returns:
  A dense bool indicator tensor representing the indices with specified value.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
def sparse_transpose(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sparse_transpose(*args, **kwargs)
It accepts the same arguments as tensorflow.sparse_transpose
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sparse_transpose(x1, *args, **kwargs)
is equivalent to
builder.sparse_transpose(*args, **kwargs)(x1)
tensorflow.sparse_transpose
Transposes a `SparseTensor`.
The returned tensor's dimension `i` will correspond to the input dimension `perm[i]`. If `perm` is not given, it is set to `(n-1...0)`, where `n` is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.
For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:

    [0, 3]: b
    [0, 1]: a
    [3, 1]: d
    [2, 0]: c

then the output will be a `SparseTensor` of shape `[5, 4]` and `indices` / `values`:

    [0, 2]: c
    [1, 0]: a
    [1, 3]: d
    [3, 0]: b
Args:
  sp_input: The input `SparseTensor`.
  perm: A permutation of the dimensions of `sp_input`.
  name: A name prefix for the returned tensors (optional).
Returns:
  A transposed `SparseTensor`.
Raises:
  TypeError: If `sp_input` is not a `SparseTensor`.
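A minimal sketch, reusing the `[4, 5]` example above (assuming a `TensorBuilder` instance named `builder`):

```python
import tensorflow as tf

# The [4, 5] example above, transposed to [5, 4].
sp = tf.SparseTensor([[0, 3], [0, 1], [3, 1], [2, 0]],
                     ["b", "a", "d", "c"], [4, 5])

transposed = tf.sparse_transpose(sp)         # default perm reverses dimensions
transposed = builder.sparse_transpose()(sp)  # builder form
```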
def split(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.split(*args, **kwargs)
It accepts the same arguments as tensorflow.split
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.split(x1, *args, **kwargs)
is equivalent to
builder.split(*args, **kwargs)(x1)
tensorflow.split
Splits a tensor into `num_split` tensors along one dimension.
Splits `value` along dimension `split_dim` into `num_split` smaller tensors. Requires that `num_split` evenly divide `value.shape[split_dim]`.
For example:
```python
# 'value' is a tensor with shape [5, 30]
# Split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(1, 3, value)
tf.shape(split0) ==> [5, 10]
```
Note: If you are splitting along an axis by the length of that axis, consider using unpack, e.g.

```python
num_items = t.get_shape()[axis].value
[tf.squeeze(s, [axis]) for s in tf.split(axis, num_items, t)]
```

can be rewritten as

```python
tf.unpack(t, axis=axis)
```
Args:
  split_dim: A 0-D `int32` `Tensor`. The dimension along which to split.
    Must be in the range `[0, rank(value))`.
  num_split: A Python integer. The number of ways to split.
  value: The `Tensor` to split.
  name: A name for the operation (optional).
Returns:
  `num_split` `Tensor` objects resulting from splitting `value`.
def sqrt(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sqrt(*args, **kwargs)
It accepts the same arguments as tensorflow.sqrt
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sqrt(x1, *args, **kwargs)
is equivalent to
builder.sqrt(*args, **kwargs)(x1)
tensorflow.sqrt
Computes square root of x element-wise.
I.e., (y = \sqrt{x} = x^{1/2}).
Args:
  x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
    `float32`, `float64`, `complex64`, `complex128`.
  name: A name for the operation (optional).
Returns:
  A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
def square(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.square(*args, **kwargs)
It accepts the same arguments as tensorflow.square
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.square(x1, *args, **kwargs)
is equivalent to
builder.square(*args, **kwargs)(x1)
tensorflow.square
Computes square of x element-wise.
I.e., (y = x * x = x^2).
Args:
  x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
    `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
  name: A name for the operation (optional).
Returns:
  A `Tensor` or `SparseTensor`. Has the same type as `x`.
def squared_difference(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.squared_difference(*args, **kwargs)
It accepts the same arguments as tensorflow.squared_difference
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.squared_difference(x1, *args, **kwargs)
is equivalent to
builder.squared_difference(*args, **kwargs)(x1)
tensorflow.squared_difference
Returns (x - y)(x - y) element-wise.
NOTE: `SquaredDifference` supports broadcasting. More about broadcasting here.
Args:
  x: A `Tensor`. Must be one of the following types: `half`, `float32`,
    `float64`, `int32`, `int64`, `complex64`, `complex128`.
  y: A `Tensor`. Must have the same type as `x`.
  name: A name for the operation (optional).
Returns:
  A `Tensor`. Has the same type as `x`.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
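A minimal usage sketch of the binary-op partial, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup); the piped value takes the place of `x`:
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 5.0])
d = T.squared_difference(y)(x)  # same op as tf.squared_difference(x, y)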
def squeeze(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.squeeze(*args, **kwargs)
It accepts the same arguments as `tensorflow.squeeze`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.squeeze(x1, *args, **kwargs)
is equivalent to
builder.squeeze(*args, **kwargs)(x1)
tensorflow.squeeze
Removes dimensions of size 1 from the shape of a tensor.
Given a tensor `input`, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don't want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying `squeeze_dims`.
For example:
```prettyprint
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t)) ==> [2, 3]
```
Or, to remove specific size 1 dimensions:
```prettyprint
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
```
Args:
input: A `Tensor`. The `input` to squeeze.
squeeze_dims: An optional list of `ints`. Defaults to `[]`. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. Contains the same data as `input`, but has one or more dimensions of size 1 removed.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
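A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

t = tf.zeros([1, 2, 1, 3, 1, 1])
s = T.squeeze()(t)         # same op as tf.squeeze(t); shape [2, 3]
s2 = T.squeeze([2, 4])(t)  # same op as tf.squeeze(t, [2, 4]); shape [1, 2, 3, 1]
```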
def state_saving_rnn(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.state_saving_rnn(*args, **kwargs)
It accepts the same arguments as `tf.nn.state_saving_rnn`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.state_saving_rnn(x1, *args, **kwargs)
is equivalent to
builder.state_saving_rnn(*args, **kwargs)(x1)
tf.nn.state_saving_rnn
RNN that accepts a state saver for time-truncated RNN calculation.
Args:
cell: An instance of `RNNCell`.
inputs: A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`.
state_saver: A state saver object with methods `state` and `save_state`.
state_name: Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., `cell.state_size` is a tuple) then `state_name` should be a tuple of strings having the same length as `cell.state_size`. Otherwise it should be a single string.
sequence_length: (optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length.
scope: VariableScope for the created subgraph; defaults to "RNN".
Returns:
A pair (outputs, state) where:
outputs is a length T list of outputs (one for each input)
state is the final state
Raises:
TypeError: If `cell` is not an instance of RNNCell.
ValueError: If `inputs` is `None` or an empty list, or if the arity and type of `state_name` does not match that of `cell.state_size`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def state_saving_rnn_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.state_saving_rnn_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.state_saving_rnn_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.state_saving_rnn`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def state_saving_rnn_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.state_saving_rnn_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.state_saving_rnn_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.state_saving_rnn`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def stop_gradient(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.stop_gradient(*args, **kwargs)
It accepts the same arguments as `tensorflow.stop_gradient`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.stop_gradient(x1, *args, **kwargs)
is equivalent to
builder.stop_gradient(*args, **kwargs)(x1)
tensorflow.stop_gradient
Stops gradient computation.
When executed in a graph, this op outputs its input tensor as-is.
When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding out inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.
This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:
- The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
- Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
- Adversarial training, where no backprop should happen through the adversarial example generation process.
Args:
input: A `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
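A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

x = tf.constant([1.0, 2.0])
frozen = T.stop_gradient()(x)  # same op as tf.stop_gradient(x)
```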
def strided_slice(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.strided_slice(*args, **kwargs)
It accepts the same arguments as `tensorflow.strided_slice`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.strided_slice(x1, *args, **kwargs)
is equivalent to
builder.strided_slice(*args, **kwargs)(x1)
tensorflow.strided_slice
Extracts a strided slice from a tensor.
To a first order, this operation extracts a slice of size `end - begin` from a tensor `input` starting at the location specified by `begin`. The slice continues by adding `stride` to the `begin` index until all dimensions are not less than `end`. Note that components of stride can be negative, which causes a reverse slice.
This operation can be thought of as an encoding of a numpy style sliced range. Given a python slice `input[spec0, spec1, ..., specn-1]`, the arguments `begin`, `end`, and `strides` will all be length n. n is in general not the same dimensionality as `input`.
For the ith spec, `begin_mask`, `end_mask`, `ellipsis_mask`, `new_axis_mask`, and `shrink_axis_mask` will have the ith bit corresponding to the ith spec.
If the ith bit of `begin_mask` is non-zero, `begin[i]` is ignored and the fullest possible range in that dimension is used instead. `end_mask` works analogously, except with the end range.
`foo[5:,:,:3]` on a 7x8x9 tensor is equivalent to `foo[5:7,0:8,0:3]`. `foo[::-1]` reverses a tensor with shape 8.
If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in `ellipsis_mask`.
For example `foo[3:5,...,4:5]` on a shape 10x3x3x10 tensor is equivalent to `foo[3:5,:,:,4:5]` and `foo[3:5,...]` is equivalent to `foo[3:5,:,:,:]`.
If the ith bit of `new_axis_mask` is one, then `begin`, `end`, and `stride` are ignored and a new length 1 dimension is added at this point in the output tensor.
For example `foo[3:5,4]` on a 10x8 tensor produces a shape 2 tensor whereas `foo[3:5,4:5]` produces a shape 2x1 tensor with shrink_mask being 1<<1 == 2.
If the ith bit of `shrink_axis_mask` is one, then `begin`, `end[i]`, and `stride[i]` are used to do a slice in the appropriate dimension, but the output tensor will be reduced in dimensionality by one. This is only valid if the ith entry of slice[i]==1.
NOTE: `begin` and `end` are zero-indexed. `strides` entries must be non-zero.
```python
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.strided_slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]]
tf.strided_slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3], [4, 4, 4]]]
tf.strided_slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==> [[[4, 4, 4], [3, 3, 3]]]
```
Args:
input_: A `Tensor`.
begin: An `int32` or `int64` `Tensor`.
end: An `int32` or `int64` `Tensor`.
strides: An `int32` or `int64` `Tensor`.
begin_mask: An `int32` mask.
end_mask: An `int32` mask.
ellipsis_mask: An `int32` mask.
new_axis_mask: An `int32` mask.
shrink_axis_mask: An `int32` mask.
var: The variable corresponding to `input_` or None
name: A name for the operation (optional).
Returns:
A `Tensor` the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
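A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

x = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])
# same op as tf.strided_slice(x, [1, 0, 0], [2, 1, 3], [1, 1, 1])
y = T.strided_slice([1, 0, 0], [2, 1, 3], [1, 1, 1])(x)  # ==> [[[3, 3, 3]]]
```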
def string_join(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.string_join(*args, **kwargs)
It accepts the same arguments as `tensorflow.string_join`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.string_join(x1, *args, **kwargs)
is equivalent to
builder.string_join(*args, **kwargs)(x1)
tensorflow.string_join
Joins the strings in the given list of string tensors into one tensor, with the given separator (default is an empty separator).
Args:
inputs: A list of at least 1 `Tensor` objects of type `string`. The tensors must all have the same shape, or be scalars. Scalars may be mixed in; these will be broadcast to the shape of non-scalar inputs.
separator: An optional `string`. Defaults to `""`. An optional join separator.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
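A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup); the piped value is the `inputs` list:
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

a = tf.constant("hello")
b = tf.constant("world")
joined = T.string_join(separator=" ")([a, b])  # same op as tf.string_join([a, b], separator=" ")
```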
def string_split(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.string_split(*args, **kwargs)
It accepts the same arguments as `tensorflow.string_split`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.string_split(x1, *args, **kwargs)
is equivalent to
builder.string_split(*args, **kwargs)(x1)
tensorflow.string_split
Split elements of `source` based on `delimiter` into a `SparseTensor`.
Let N be the size of source (typically N will be the batch size). Split each element of `source` based on `delimiter` and return a `SparseTensor` containing the split tokens. Empty tokens are ignored.
If `delimiter` is an empty string, each element of the `source` is split into individual 1 character strings.
For example: N = 2, source[0] is 'hello world' and source[1] is 'a b c', then the output will be
st.indices = [0, 0; 0, 1; 1, 0; 1, 1; 1, 2]
st.shape = [2, 3]
st.values = ['hello', 'world', 'a', 'b', 'c']
Args:
source: `1-D` string `Tensor`, the strings to split.
delimiter: `0-D` string `Tensor`, the delimiter character, the string should be length 0 or 1.
Returns:
A `SparseTensor` of rank `2`, the strings split according to the delimiter. The first column of the indices corresponds to the row in `source` and the second column corresponds to the index of the split component in this row.
Raises: ValueError: If delimiter is not a character.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
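A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

source = tf.constant(["hello world", "a b c"])
st = T.string_split()(source)  # same op as tf.string_split(source)
# st.values ==> ['hello', 'world', 'a', 'b', 'c']
```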
def string_to_hash_bucket(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.string_to_hash_bucket(*args, **kwargs)
It accepts the same arguments as `tensorflow.string_to_hash_bucket`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.string_to_hash_bucket(x1, *args, **kwargs)
is equivalent to
builder.string_to_hash_bucket(*args, **kwargs)(x1)
tensorflow.string_to_hash_bucket
Converts each string in the input Tensor to its hash mod by a number of buckets.
The hash function is deterministic on the content of the string within the process.
Note that the hash function may change from time to time.
This functionality will be deprecated and it's recommended to use `tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`.
Args:
string_tensor: A `Tensor` of type `string`.
num_buckets: An `int` that is `>= 1`. The number of buckets.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`. A Tensor of the same shape as the input `string_tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
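A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup); the same pattern applies to the `_fast` and `_strong` variants below:
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

words = tf.constant(["lorem", "ipsum"])
buckets = T.string_to_hash_bucket(1000)(words)  # same op as tf.string_to_hash_bucket(words, 1000)
```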
def string_to_hash_bucket_fast(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.string_to_hash_bucket_fast(*args, **kwargs)
It accepts the same arguments as `tensorflow.string_to_hash_bucket_fast`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.string_to_hash_bucket_fast(x1, *args, **kwargs)
is equivalent to
builder.string_to_hash_bucket_fast(*args, **kwargs)(x1)
tensorflow.string_to_hash_bucket_fast
Converts each string in the input Tensor to its hash mod by a number of buckets.
The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with `tf.string_to_hash_bucket_strong`.
Args:
input: A `Tensor` of type `string`. The strings to assign a hash bucket.
num_buckets: An `int` that is `>= 1`. The number of buckets.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`. A Tensor of the same shape as the input `string_tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def string_to_hash_bucket_strong(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.string_to_hash_bucket_strong(*args, **kwargs)
It accepts the same arguments as `tensorflow.string_to_hash_bucket_strong`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.string_to_hash_bucket_strong(x1, *args, **kwargs)
is equivalent to
builder.string_to_hash_bucket_strong(*args, **kwargs)(x1)
tensorflow.string_to_hash_bucket_strong
Converts each string in the input Tensor to its hash mod by a number of buckets.
The hash function is deterministic on the content of the string within the process. The hash function is a keyed hash function, where attribute `key` defines the key of the hash function. `key` is an array of 2 elements.
A strong hash is important when inputs may be malicious, e.g. URLs with additional components. Adversaries could try to make their inputs hash to the same bucket for a denial-of-service attack or to skew the results. A strong hash prevents this by making it difficult, if not infeasible, to compute inputs that hash to the same bucket. This comes at a cost of roughly 4x higher compute time than `tf.string_to_hash_bucket_fast`.
Args:
input: A `Tensor` of type `string`. The strings to assign a hash bucket.
num_buckets: An `int` that is `>= 1`. The number of buckets.
key: A list of `ints`. The key for the keyed hash function passed as a list of two `uint64` elements.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`. A Tensor of the same shape as the input `string_tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def string_to_number(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.string_to_number(*args, **kwargs)
It accepts the same arguments as `tensorflow.string_to_number`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.string_to_number(x1, *args, **kwargs)
is equivalent to
builder.string_to_number(*args, **kwargs)(x1)
tensorflow.string_to_number
Converts each string in the input Tensor to the specified numeric type.
(Note that int32 overflow results in an error while float overflow results in a rounded value.)
Args:
string_tensor: A `Tensor` of type `string`.
out_type: An optional `tf.DType` from: `tf.float32, tf.int32`. Defaults to `tf.float32`. The numeric type to interpret each string in `string_tensor` as.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `out_type`. A Tensor of the same shape as the input `string_tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
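A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

s = tf.constant(["1.5", "2.25"])
n = T.string_to_number()(s)  # same op as tf.string_to_number(s); dtype tf.float32
```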
def sub(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sub(*args, **kwargs)
It accepts the same arguments as `tensorflow.sub`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.sub(x1, *args, **kwargs)
is equivalent to
builder.sub(*args, **kwargs)(x1)
tensorflow.sub
Returns x - y element-wise.
NOTE: `Sub` supports broadcasting. More about broadcasting here.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sufficient_statistics(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sufficient_statistics(*args, **kwargs)
It accepts the same arguments as `tf.nn.sufficient_statistics`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.sufficient_statistics(x1, *args, **kwargs)
is equivalent to
builder.sufficient_statistics(*args, **kwargs)(x1)
tf.nn.sufficient_statistics
Calculate the sufficient statistics for the mean and variance of `x`.
These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data
Args:
x: A `Tensor`.
axes: Array of ints. Axes along which to compute mean and variance.
shift: A `Tensor` containing the value by which to shift the data for numerical stability, or `None` if no shift is to be performed. A shift close to the true mean provides the most numerically stable results.
keep_dims: produce statistics with the same dimensionality as the input.
name: Name used to scope the operations that compute the sufficient stats.
Returns:
Four `Tensor` objects of the same type as `x`:
* the count (number of elements to average over).
* the (possibly shifted) sum of the elements in the array.
* the (possibly shifted) sum of squares of the elements in the array.
* the shift by which the mean must be corrected, or None if `shift` is None.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sufficient_statistics_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sufficient_statistics_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.sufficient_statistics_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sufficient_statistics`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def sufficient_statistics_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.sufficient_statistics_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.sufficient_statistics_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.sufficient_statistics`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def svd(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.svd(*args, **kwargs)
It accepts the same arguments as `tensorflow.svd`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.svd(x1, *args, **kwargs)
is equivalent to
builder.svd(*args, **kwargs)(x1)
tensorflow.svd
Computes the singular value decompositions of one or more matrices.
Computes the SVD of each inner matrix in `tensor` such that `tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :, :])`.
```prettyprint
# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)
```
Args:
matrix: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and `N`.
compute_uv: If `True` then left and right singular vectors will be computed and returned in `u` and `v`, respectively. Otherwise, only the singular values will be computed, which can be significantly faster.
full_matrices: If true, compute full-sized `u` and `v`. If false (the default), compute only the leading `P` singular vectors. Ignored if `compute_uv` is `False`.
name: string, optional name of the operation.
Returns:
s: Singular values. Shape is `[..., P]`.
u: Left singular vectors. If `full_matrices` is `False` (default) then shape is `[..., M, P]`; if `full_matrices` is `True` then shape is `[..., M, M]`. Not returned if `compute_uv` is `False`.
v: Right singular vectors. If `full_matrices` is `False` (default) then shape is `[..., N, P]`. If `full_matrices` is `True` then shape is `[..., N, N]`. Not returned if `compute_uv` is `False`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
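A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

a = tf.random_normal([4, 3])
s, u, v = T.svd()(a)                 # same ops as tf.svd(a)
s_only = T.svd(compute_uv=False)(a)  # singular values only, which is faster
```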
def tan(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.tan(*args, **kwargs)
It accepts the same arguments as `tensorflow.tan`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.tan(x1, *args, **kwargs)
is equivalent to
builder.tan(*args, **kwargs)(x1)
tensorflow.tan
Computes tan of x element-wise.
Args:
x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def tanh(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.tanh(*args, **kwargs)
It accepts the same arguments as `tf.nn.tanh`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.tanh(x1, *args, **kwargs)
is equivalent to
builder.tanh(*args, **kwargs)(x1)
tf.nn.tanh
Computes hyperbolic tangent of `x` element-wise.
Args:
x: A Tensor or SparseTensor with type `float`, `double`, `int32`, `complex64`, `int64`, or `qint32`.
name: A name for the operation (optional).
Returns:
A Tensor or SparseTensor, respectively, with the same type as `x` if `x.dtype != qint32`, otherwise the return type is `quint8`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def tanh_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.tanh_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.tanh_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.tanh`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def tanh_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.tanh_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.tanh_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.tanh`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def tile(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.tile(*args, **kwargs)
It accepts the same arguments as `tensorflow.tile`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.tile(x1, *args, **kwargs)
is equivalent to
builder.tile(*args, **kwargs)(x1)
tensorflow.tile
Constructs a tensor by tiling a given tensor.
This operation creates a new tensor by replicating `input` `multiples` times. The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements, and the values of `input` are replicated `multiples[i]` times along the i'th dimension. For example, tiling `[a b c d]` by `[2]` produces `[a b c d a b c d]`.
Args:
input: A `Tensor`. 1-D or higher.
multiples: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D. Length must be the same as the number of dimensions in `input`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
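A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

t = tf.constant([1, 2, 3, 4])
tiled = T.tile([2])(t)  # same op as tf.tile(t, [2]) ==> [1, 2, 3, 4, 1, 2, 3, 4]
```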
def to_bfloat16(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.to_bfloat16(*args, **kwargs)
It accepts the same arguments as `tensorflow.to_bfloat16`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.to_bfloat16(x1, *args, **kwargs)
is equivalent to
builder.to_bfloat16(*args, **kwargs)(x1)
tensorflow.to_bfloat16
Casts a tensor to type `bfloat16`.
Args:
x: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`.
Raises:
TypeError: If `x` cannot be cast to `bfloat16`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def to_double(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.to_double(*args, **kwargs)
It accepts the same arguments as `tensorflow.to_double`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.to_double(x1, *args, **kwargs)
is equivalent to
builder.to_double(*args, **kwargs)(x1)
tensorflow.to_double
Casts a tensor to type `float64`.
Args:
x: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor` with same shape as `x` with type `float64`.
Raises:
TypeError: If `x` cannot be cast to `float64`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def to_float(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.to_float(*args, **kwargs)
It accepts the same arguments as `tensorflow.to_float`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.to_float(x1, *args, **kwargs)
is equivalent to
builder.to_float(*args, **kwargs)(x1)
tensorflow.to_float
Casts a tensor to type `float32`.
Args:
x: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`.
Raises:
TypeError: If `x` cannot be cast to `float32`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
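A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup); the same pattern applies to `to_bfloat16`, `to_double`, `to_int32`, and `to_int64`:
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

i = tf.constant([1, 2, 3])
f = T.to_float()(i)  # same op as tf.to_float(i)
```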
def to_int32(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.to_int32(*args, **kwargs)
It accepts the same arguments as `tensorflow.to_int32`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.to_int32(x1, *args, **kwargs)
is equivalent to
builder.to_int32(*args, **kwargs)(x1)
tensorflow.to_int32
Casts a tensor to type `int32`.
Args:
x: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor` with same shape as `x` with type `int32`.
Raises:
TypeError: If `x` cannot be cast to `int32`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def to_int64(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.to_int64(*args, **kwargs)
It accepts the same arguments as `tensorflow.to_int64`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.to_int64(x1, *args, **kwargs)
is equivalent to
builder.to_int64(*args, **kwargs)(x1)
tensorflow.to_int64
Casts a tensor to type `int64`.
Args:
x: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` or `SparseTensor` with same shape as `x` with type `int64`.
Raises:
TypeError: If `x` cannot be cast to `int64`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def top_k(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.top_k(*args, **kwargs)
It accepts the same arguments as `tf.nn.top_k`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.top_k(x1, *args, **kwargs)
is equivalent to
builder.top_k(*args, **kwargs)(x1)
tf.nn.top_k
Finds values and indices of the `k` largest entries for the last dimension.
If the input is a vector (rank-1), finds the `k` largest entries in the vector and outputs their values and indices as vectors. Thus `values[j]` is the `j`-th largest entry in `input`, and its index is `indices[j]`.
For matrices (resp. higher rank input), computes the top `k` entries in each row (resp. vector along the last dimension). Thus,
values.shape = indices.shape = input.shape[:-1] + [k]
If two elements are equal, the lower-index element appears first.
Args:
input: 1-D or higher `Tensor` with last dimension at least `k`.
k: 0-D `int32` `Tensor`. Number of top elements to look for along the last dimension (along each row for matrices).
sorted: If true the resulting `k` elements will be sorted by the values in descending order.
name: Optional name for the operation.
Returns:
values: The `k` largest elements along each last dimensional slice.
indices: The indices of `values` within the last dimension of `input`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
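A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

x = tf.constant([1.0, 5.0, 3.0, 4.0])
values, indices = T.top_k(2)(x)  # same op as tf.nn.top_k(x, 2)
# values ==> [5.0, 4.0], indices ==> [1, 3]
```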
def top_k_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.top_k_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.top_k_conv2d_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.top_k`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def top_k_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.top_k_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.top_k_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.top_k`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def trace(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.trace(*args, **kwargs)
It accepts the same arguments as `tensorflow.trace`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.trace(x1, *args, **kwargs)
is equivalent to
builder.trace(*args, **kwargs)(x1)
tensorflow.trace
Compute the trace of a tensor `x`.
`trace(x)` returns the sum along the diagonal.
For example:
```python
# 'x' is [[1, 1],
#         [1, 1]]
tf.trace(x) ==> 2

# 'x' is [[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]]
tf.trace(x) ==> 15
```
Args:
x: 2-D tensor.
name: A name for the operation (optional).
Returns: The trace of input tensor.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def trainable_variables(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.trainable_variables(*args, **kwargs)
It accepts the same arguments as `tensorflow.trainable_variables`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.trainable_variables(x1, *args, **kwargs)
is equivalent to
builder.trainable_variables(*args, **kwargs)(x1)
tensorflow.trainable_variables
Returns all variables created with `trainable=True`.
When passed `trainable=True`, the `Variable()` constructor automatically adds new variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the contents of that collection.
Returns: A list of Variable objects.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
def transpose(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.transpose(*args, **kwargs)
It accepts the same arguments as `tensorflow.transpose`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.transpose(x1, *args, **kwargs)
is equivalent to
builder.transpose(*args, **kwargs)(x1)
tensorflow.transpose
Transposes `a`. Permutes the dimensions according to `perm`.
The returned tensor's dimension i will correspond to the input dimension `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.
For example:
```python
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.transpose(x) ==> [[1 4] [2 5] [3 6]]

# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4] [2 5] [3 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is [[[1 2 3]
#          [4 5 6]]
#         [[7 8 9]
#          [10 11 12]]]
# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4] [2 5] [3 6]]
                                     [[7 10] [8 11] [9 12]]]
```
Args:
a: A `Tensor`.
perm: A permutation of the dimensions of `a`.
name: A name for the operation (optional).
Returns:
A transposed `Tensor`.
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
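A minimal usage sketch, assuming a `TensorBuilder` instance named `T` (an assumed import that may differ in your setup):
```python
import tensorflow as tf
from tensorbuilder import T  # assumed import

x = tf.constant([[1, 2, 3], [4, 5, 6]])
xt = T.transpose()(x)              # same op as tf.transpose(x)
xt2 = T.transpose(perm=[1, 0])(x)  # explicit permutation
```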
def truediv(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.truediv(*args, **kwargs)
It accepts the same arguments as `tensorflow.truediv`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.truediv(x1, *args, **kwargs)
is equivalent to
builder.truediv(*args, **kwargs)(x1)
tensorflow.truediv
Divides x / y elementwise, always producing floating point results.
The same as tf.div
for floating point arguments, but casts integer arguments
to floating point before dividing so that the result is always floating point.
This op is generated by normal x / y
division in Python 3 and in Python 2.7
with from __future__ import division
. If you want integer division that
rounds down, use x // y
or tf.floordiv
.
x
and y
must have the same numeric type. If the inputs are floating
point, the output will have the same type. If the inputs are integral, the
inputs are cast to float32
for int8
and int16
and float64
for int32
and int64
(matching the behavior of Numpy).
Args:
x: Tensor
numerator of numeric type.
y: Tensor
denominator of numeric type.
name: A name for the operation (optional).
Returns:
x / y
evaluated in floating point.
Raises:
TypeError: If x
and y
have different dtypes.
@functools.wraps(fn) def method(self, *args, **kwargs): kwargs['_return_type'] = _return_type return self.Then(fn, *args, **kwargs)
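Because the omitted 1st argument is the numerator, `builder.truediv(y)` reads as "divide by `y`". A minimal sketch under the same assumptions as the `transpose` example (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

x = tf.constant([7, 8], dtype=tf.int32)
div_by_2 = builder.truediv(tf.constant(2))  # partial: waits for the numerator x1
y = div_by_2(x)                             # == tf.truediv(x, 2); int32 inputs -> float64
with tf.Session() as sess:
    print(sess.run(y))  # [3.5 4.0]
```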
def truncated_normal(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.truncated_normal(*args, **kwargs)
It accepts the same arguments as `tensorflow.truncated_normal`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.truncated_normal(x1, *args, **kwargs)
is equivalent to
builder.truncated_normal(*args, **kwargs)(x1)
tensorflow.truncated_normal
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type `dtype`. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation of the truncated normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See `set_random_seed` for behavior.
name: A name for the operation (optional).
Returns: A tensor of the specified shape filled with random truncated normal values.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
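Here the omitted 1st argument is `shape`, so the partial is applied to the shape last. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

sample = builder.truncated_normal(stddev=0.1)([2, 3])  # == tf.truncated_normal([2, 3], stddev=0.1)
with tf.Session() as sess:
    print(sess.run(sample).shape)  # (2, 3)
```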
def truncated_normal_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.truncated_normal_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.truncated_normal_initializer`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.truncated_normal_initializer(x1, *args, **kwargs)
is equivalent to
builder.truncated_normal_initializer(*args, **kwargs)(x1)
tensorflow.truncated_normal_initializer
Returns an initializer that generates a truncated normal distribution.
These values are similar to values from a `random_normal_initializer` except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.
Args:
mean: a python scalar or a scalar tensor. Mean of the random values to generate.
stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate.
seed: A Python integer. Used to create random seeds. See `set_random_seed` for behavior.
dtype: The data type. Only floating point types are supported.
Returns: An initializer that generates tensors with a truncated normal distribution.
Raises:
ValueError: if `dtype` is not a floating point type.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
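For this wrapper the omitted 1st argument is `mean`, so completing the partial with a scalar yields the initializer; this just illustrates the generated calling convention, not a recommended style. Sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

# Passing 0.0 fills the omitted `mean` argument.
init = builder.truncated_normal_initializer(stddev=5e-2)(0.0)
with tf.variable_scope("conv1"):
    weights = tf.get_variable("weights", shape=[5, 5, 3, 64], initializer=init)
```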
def tuple(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.tuple(*args, **kwargs)
It accepts the same arguments as `tensorflow.tuple`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.tuple(x1, *args, **kwargs)
is equivalent to
builder.tuple(*args, **kwargs)(x1)
tensorflow.tuple
Group tensors together.
This creates a tuple of tensors with the same values as the `tensors` argument, except that the value of each tensor is only returned after the values of all tensors have been computed.
`control_inputs` contains additional ops that have to finish before this op finishes, but whose outputs are not returned.
This can be used as a "join" mechanism for parallel computations: all the argument tensors can be computed in parallel, but the values of any tensor returned by `tuple` are only available after all the parallel computations are done.
See also `group` and `with_dependencies`.
Args:
tensors: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.
name: (optional) A name to use as a `name_scope` for the operation.
control_inputs: List of additional ops to finish before returning.
Returns:
Same as `tensors`.
Raises:
ValueError: If `tensors` does not contain any `Tensor` or `IndexedSlices`.
TypeError: If `control_inputs` is not a list of `Operation` or `Tensor` objects.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
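The omitted 1st argument is the list of tensors, so the partial acts as the "join" point. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

a = tf.constant(1.0) * 2.0
b = tf.constant(3.0) * 4.0
# Both branches become available together, as with tf.tuple([a, b]).
a_joined, b_joined = builder.tuple()([a, b])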
def uniform_candidate_sampler(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.uniform_candidate_sampler(*args, **kwargs)
It accepts the same arguments as `tf.nn.uniform_candidate_sampler`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.uniform_candidate_sampler(x1, *args, **kwargs)
is equivalent to
builder.uniform_candidate_sampler(*args, **kwargs)(x1)
tf.nn.uniform_candidate_sampler
Samples a set of classes using a uniform base distribution.
This operation randomly samples a tensor of sampled classes (`sampled_candidates`) from the range of integers `[0, range_max)`.
The elements of `sampled_candidates` are drawn without replacement (if `unique=True`) or with replacement (if `unique=False`) from the base distribution.
The base distribution for this operation is the uniform distribution over the range of integers `[0, range_max)`.
In addition, this operation returns tensors `true_expected_count` and `sampled_expected_count` representing the number of times each of the target classes (`true_classes`) and the sampled classes (`sampled_candidates`) is expected to occur in an average tensor of sampled classes. These values correspond to `Q(y|x)` defined in this document. If `unique=True`, then these are post-rejection probabilities and we compute them approximately.
Args:
true_classes: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes.
num_true: An `int`. The number of target classes per training example.
num_sampled: An `int`. The number of classes to randomly sample per batch.
unique: A `bool`. Determines whether all sampled classes in a batch are unique.
range_max: An `int`. The number of possible classes.
seed: An `int`. An operation-specific seed. Default is 0.
name: A name for the operation (optional).
Returns:
sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`. The sampled classes.
true_expected_count: A tensor of type `float`. Same shape as `true_classes`. The expected counts under the sampling distribution of each of `true_classes`.
sampled_expected_count: A tensor of type `float`. Same shape as `sampled_candidates`. The expected counts under the sampling distribution of each of `sampled_candidates`.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
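The omitted 1st argument is `true_classes`, so the sampler's hyperparameters are bound first. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

true_classes = tf.constant([[0], [3]], dtype=tf.int64)  # shape [batch_size, num_true]
sample = builder.uniform_candidate_sampler(
    num_true=1, num_sampled=4, unique=True, range_max=10)
sampled, true_expected, sampled_expected = sample(true_classes)
```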
def uniform_candidate_sampler_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.uniform_candidate_sampler_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.uniform_candidate_sampler_conv2d_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.uniform_candidate_sampler`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def uniform_candidate_sampler_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.uniform_candidate_sampler_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.uniform_candidate_sampler_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.uniform_candidate_sampler`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def uniform_unit_scaling_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.uniform_unit_scaling_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.uniform_unit_scaling_initializer`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.uniform_unit_scaling_initializer(x1, *args, **kwargs)
is equivalent to
builder.uniform_unit_scaling_initializer(*args, **kwargs)(x1)
tensorflow.uniform_unit_scaling_initializer
Returns an initializer that generates tensors without scaling variance.
When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. If the input is `x` and the operation `x * W`, and we want to initialize `W` uniformly at random, we need to pick `W` from
[-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]
to keep the scale intact, where `dim = W.shape[0]` (the size of the input). A similar calculation for convolutional networks gives an analogous result with `dim` equal to the product of the first 3 dimensions. When nonlinearities are present, we need to multiply this by a constant `factor`. See Sussillo et al., 2014 (pdf) for deeper motivation, experiments and the calculation of constants. In section 2.3 there, the constants were numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.
Args:
factor: Float. A multiplicative factor by which the values will be scaled.
seed: A Python integer. Used to create random seeds. See `set_random_seed` for behavior.
dtype: The data type. Only floating point types are supported.
Returns: An initializer that generates tensors with unit variance.
Raises:
ValueError: if `dtype` is not a floating point type.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def unique(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.unique(*args, **kwargs)
It accepts the same arguments as `tensorflow.unique`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.unique(x1, *args, **kwargs)
is equivalent to
builder.unique(*args, **kwargs)(x1)
tensorflow.unique
Finds unique elements in a 1-D tensor.
This operation returns a tensor `y` containing all of the unique elements of `x` sorted in the same order that they occur in `x`. This operation also returns a tensor `idx` the same size as `x` that contains the index of each value of `x` in the unique output `y`. In other words:
y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]
For example:
```prettyprint
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
```
Args:
x: A `Tensor`. 1-D.
out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (y, idx).
y: A `Tensor`. Has the same type as `x`. 1-D.
idx: A `Tensor` of type `out_idx`. 1-D.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
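A minimal sketch of the wrapped call, under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
y, idx = builder.unique()(x)  # == tf.unique(x)
with tf.Session() as sess:
    print(sess.run(y))    # [1 2 4 7 8]
    print(sess.run(idx))  # [0 0 1 2 2 2 3 4 4]
```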
def unique_with_counts(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.unique_with_counts(*args, **kwargs)
It accepts the same arguments as `tensorflow.unique_with_counts`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.unique_with_counts(x1, *args, **kwargs)
is equivalent to
builder.unique_with_counts(*args, **kwargs)(x1)
tensorflow.unique_with_counts
Finds unique elements in a 1-D tensor.
This operation returns a tensor `y` containing all of the unique elements of `x` sorted in the same order that they occur in `x`. This operation also returns a tensor `idx` the same size as `x` that contains the index of each value of `x` in the unique output `y`. Finally, it returns a third tensor `count` that contains the count of each element of `y` in `x`. In other words:
y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]
For example:
```prettyprint
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
```
Args:
x: A `Tensor`. 1-D.
out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (y, idx, count).
y: A `Tensor`. Has the same type as `x`. 1-D.
idx: A `Tensor` of type `out_idx`. 1-D.
count: A `Tensor` of type `out_idx`. 1-D.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def unpack(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.unpack(*args, **kwargs)
It accepts the same arguments as `tensorflow.unpack`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.unpack(x1, *args, **kwargs)
is equivalent to
builder.unpack(*args, **kwargs)(x1)
tensorflow.unpack
Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.
Unpacks `num` tensors from `value` by chipping it along the `axis` dimension. If `num` is not specified (the default), it is inferred from `value`'s shape. If `value.shape[axis]` is not known, `ValueError` is raised.
For example, given a tensor of shape `(A, B, C, D)`:
If `axis == 0` then the i'th tensor in `output` is the slice `value[i, :, :, :]` and each tensor in `output` will have shape `(B, C, D)`. (Note that the dimension unpacked along is gone, unlike `split`).
If `axis == 1` then the i'th tensor in `output` is the slice `value[:, i, :, :]` and each tensor in `output` will have shape `(A, C, D)`.
Etc.
This is the opposite of pack. The numpy equivalent is
tf.unpack(x, n) = list(x)
Args:
value: A rank `R > 0` `Tensor` to be unpacked.
num: An `int`. The length of the dimension `axis`. Automatically inferred if `None` (the default).
axis: An `int`. The axis to unpack along. Defaults to the first dimension. Supports negative indexes.
name: A name for the operation (optional).
Returns:
The list of `Tensor` objects unpacked from `value`.
Raises:
ValueError: If `num` is unspecified and cannot be inferred.
ValueError: If `axis` is out of the range [-R, R).

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
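A minimal sketch of the wrapped call, under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

t = tf.constant([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
rows = builder.unpack()(t)               # == tf.unpack(t): two tensors of shape (3,)
with tf.Session() as sess:
    print([sess.run(r) for r in rows])   # [array([1, 2, 3]), array([4, 5, 6])]
```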
def unsorted_segment_sum(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.unsorted_segment_sum(*args, **kwargs)
It accepts the same arguments as `tensorflow.unsorted_segment_sum`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.unsorted_segment_sum(x1, *args, **kwargs)
is equivalent to
builder.unsorted_segment_sum(*args, **kwargs)(x1)
tensorflow.unsorted_segment_sum
Computes the sum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that
output[i] = sum_{j...} data[j...]
where the sum is over tuples `j...` such that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids` need not be sorted and need not cover all values in the full range of valid values.
If the sum is empty for a given segment ID `i`, `output[i] = 0`.
`num_segments` should equal the number of distinct segment IDs.
Args:
data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`. A tensor whose shape is a prefix of `data.shape`.
num_segments: A `Tensor` of type `int32`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `data`. Has same shape as data, except for the first `segment_ids.rank` dimensions, which are replaced with a single dimension which has size `num_segments`.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
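The omitted 1st argument is `data`; `segment_ids` and `num_segments` are bound first. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

data = tf.constant([1, 2, 3, 4])
segment_ids = tf.constant([0, 1, 0, 1])
sums = builder.unsorted_segment_sum(segment_ids, 2)(data)  # == tf.unsorted_segment_sum(data, segment_ids, 2)
with tf.Session() as sess:
    print(sess.run(sums))  # [4 6]
```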
def variable_axis_size_partitioner(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.variable_axis_size_partitioner(*args, **kwargs)
It accepts the same arguments as `tensorflow.variable_axis_size_partitioner`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.variable_axis_size_partitioner(x1, *args, **kwargs)
is equivalent to
builder.variable_axis_size_partitioner(*args, **kwargs)(x1)
tensorflow.variable_axis_size_partitioner
Get a partitioner for VariableScope to keep shards below `max_shard_bytes`.
This partitioner will shard a Variable along one axis, attempting to keep the maximum shard size below `max_shard_bytes`. In practice, this is not always possible when sharding along only one axis. When this happens, this axis is sharded as much as possible (i.e., every dimension becomes a separate shard).
If the partitioner hits the `max_shards` limit, then each shard may end up larger than `max_shard_bytes`. By default `max_shards` equals `None` and no limit on the number of shards is enforced.
One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost `64MB`, to keep below the protobuf byte limit.
Args:
max_shard_bytes: The maximum size any given shard is allowed to be.
axis: The axis to partition along. Default: outermost axis.
bytes_per_string_element: If the `Variable` is of type string, this provides an estimate of how large each scalar in the `Variable` is.
max_shards: An int; the maximum number of shards to create, taking precedence over `max_shard_bytes`.
Returns:
A partition function usable as the `partitioner` argument to `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
Raises: ValueError: If any of the byte counts are non-positive.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def variable_op_scope(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.variable_op_scope(*args, **kwargs)
It accepts the same arguments as `tensorflow.variable_op_scope`.
However, a partial with the arguments is returned which expects any argument `x` and completely ignores it, such that
tensorflow.variable_op_scope(*args, **kwargs)
is equivalent to
builder.variable_op_scope(*args, **kwargs)(x)
tensorflow.variable_op_scope
Deprecated: context manager for defining an op that creates variables.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then0(fn, *args, **kwargs)
def variable_scope(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.variable_scope(*args, **kwargs)
It accepts the same arguments as `tensorflow.variable_scope`.
However, a partial with the arguments is returned which expects any argument `x` and completely ignores it, such that
tensorflow.variable_scope(*args, **kwargs)
is equivalent to
builder.variable_scope(*args, **kwargs)(x)
tensorflow.variable_scope
Returns a context manager for defining ops that creates variables (layers).
This context manager validates that the (optional) `values` are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope.
If `name_or_scope` is not None, it is used as is. If `scope` is None, then `default_name` is used. In that case, if the same name has been previously used in the same scope, it will be made unique by appending `_N` to it.
Variable scope allows you to create new variables and to share already created ones while providing checks to not create or share by accident. For details, see the Variable Scope How To; here we present only a few basic examples.
Simple example of how to create a new variable:
```python
with tf.variable_scope("foo"):
    with tf.variable_scope("bar"):
        v = tf.get_variable("v", [1])
        assert v.name == "foo/bar/v:0"
```
Basic example of sharing a variable:
```python
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v", [1])
assert v1 == v
```
Sharing a variable by capturing a scope and setting reuse:
```python
with tf.variable_scope("foo") as scope:
    v = tf.get_variable("v", [1])
    scope.reuse_variables()
    v1 = tf.get_variable("v", [1])
assert v1 == v
```
To prevent accidental sharing of variables, we raise an exception when getting an existing variable in a non-reusing scope.
```python
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])
    v1 = tf.get_variable("v", [1])
    # Raises ValueError("... v already exists ...").
```
Similarly, we raise an exception when trying to get a variable that does not exist in reuse mode.
```python
with tf.variable_scope("foo", reuse=True):
    v = tf.get_variable("v", [1])
    # Raises ValueError("... v does not exists ...").
```
Note that the `reuse` flag is inherited: if we open a reusing scope, then all its sub-scopes become reusing as well.
Args:
name_or_scope: `string` or `VariableScope`: the scope to open.
default_name: The default name to use if the `name_or_scope` argument is `None`; this name will be uniquified. If name_or_scope is provided it won't be used and therefore it is not required and can be None.
values: The list of `Tensor` arguments that are passed to the op function.
initializer: default initializer for variables within this scope.
regularizer: default regularizer for variables within this scope.
caching_device: default caching device for variables within this scope.
partitioner: default partitioner for variables within this scope.
custom_getter: default custom getter for variables within this scope.
reuse: `True` or `None`; if `True`, we go into reuse mode for this scope as well as all sub-scopes; if `None`, we just inherit the parent scope reuse.
dtype: type of variables created in this scope (defaults to the type in the passed scope, or inherited from parent scope).
Returns: A scope that can be captured and reused.
Raises:
ValueError: when trying to reuse within a create scope, or create within a reuse scope, or if reuse is not `None` or `True`.
TypeError: when the types of some arguments are not appropriate.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then0(fn, *args, **kwargs)
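Because this wrapper is built on `Then0`, the completed partial discards whatever value it is given and simply yields the `tf.variable_scope` context manager. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

# The partial ignores its argument (None here) and returns the context manager.
with builder.variable_scope("foo")(None):
    v = tf.get_variable("v", [1])
assert v.name == "foo/v:0"
```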
def verify_tensor_all_finite(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.verify_tensor_all_finite(*args, **kwargs)
It accepts the same arguments as `tensorflow.verify_tensor_all_finite`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.verify_tensor_all_finite(x1, *args, **kwargs)
is equivalent to
builder.verify_tensor_all_finite(*args, **kwargs)(x1)
tensorflow.verify_tensor_all_finite
Assert that the tensor does not contain any NaN's or Inf's.
Args:
t: Tensor to check.
msg: Message to log on failure.
name: A name for this operation (optional).
Returns:
Same tensor as `t`.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def weighted_cross_entropy_with_logits(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.weighted_cross_entropy_with_logits(*args, **kwargs)
It accepts the same arguments as `tf.nn.weighted_cross_entropy_with_logits`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.weighted_cross_entropy_with_logits(x1, *args, **kwargs)
is equivalent to
builder.weighted_cross_entropy_with_logits(*args, **kwargs)(x1)
tf.nn.weighted_cross_entropy_with_logits
Computes a weighted cross entropy.
This is like sigmoid_cross_entropy_with_logits() except that `pos_weight` allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.
The usual cross-entropy cost is defined as:
targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
The argument `pos_weight` is used as a multiplier for the positive targets:
targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))
For brevity, let `x = logits`, `z = targets`, `q = pos_weight`. The loss is:
  qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow, the implementation uses
(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
`logits` and `targets` must have the same type and shape.
Args:
logits: A `Tensor` of type `float32` or `float64`.
targets: A `Tensor` of the same type and shape as `logits`.
pos_weight: A coefficient to use on the positive examples.
name: A name for the operation (optional).
Returns:
A `Tensor` of the same shape as `logits` with the componentwise weighted logistic losses.
Raises:
ValueError: If `logits` and `targets` do not have the same shape.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
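The omitted 1st argument is `logits`, so `targets` and `pos_weight` are bound first. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

logits = tf.constant([0.5, -1.0, 2.0])
targets = tf.constant([1.0, 0.0, 1.0])
# Positive errors cost twice as much as negative errors (pos_weight=2.0).
loss = builder.weighted_cross_entropy_with_logits(targets, pos_weight=2.0)(logits)
```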
def weighted_cross_entropy_with_logits_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.weighted_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.weighted_cross_entropy_with_logits_conv2d_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.weighted_cross_entropy_with_logits`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def weighted_cross_entropy_with_logits_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.weighted_cross_entropy_with_logits_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.weighted_cross_entropy_with_logits_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.weighted_cross_entropy_with_logits`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def where(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.where(*args, **kwargs)
It accepts the same arguments as `tensorflow.where`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.where(x1, *args, **kwargs)
is equivalent to
builder.where(*args, **kwargs)(x1)
tensorflow.where
Returns locations of true values in a boolean tensor.
This operation returns the coordinates of true elements in `input`. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in `input`. Indices are output in row-major order.
For example:
```prettyprint
# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0],
                  [1, 0]]

# `input` tensor is [[[True, False]
#                     [True, False]]
#                    [[False, True]
#                     [False, True]]
#                    [[False, False]
#                     [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
                  [0, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [2, 1, 1]]
```
Args:
input: A `Tensor` of type `bool`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
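A minimal sketch of the wrapped call, under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

mask = tf.constant([[True, False], [True, False]])
coords = builder.where()(mask)  # == tf.where(mask)
with tf.Session() as sess:
    print(sess.run(coords))  # [[0 0] [1 0]]
```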
def while_loop(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.while_loop(*args, **kwargs)
It accepts the same arguments as `tensorflow.while_loop`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tensorflow.while_loop(x1, *args, **kwargs)
is equivalent to
builder.while_loop(*args, **kwargs)(x1)
tensorflow.while_loop
Repeat `body` while the condition `cond` is true.
`cond` is a callable returning a boolean scalar tensor. `body` is a callable returning a (possibly nested) tuple or list of tensors of the same arity (length and structure) and types as `loop_vars`. `loop_vars` is a (possibly nested) tuple or list of tensors that is passed to both `cond` and `body`. `cond` and `body` both take as many arguments as there are `loop_vars`.
While `cond` evaluates to true, `body` is executed.
In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations.
For correctness, `tf.while_loop()` strictly enforces shape invariants for the loop variables. A shape invariant is a (possibly partial) shape that is unchanged across the iterations of the loop. An error will be raised if the shape of a loop variable after an iteration is determined to be more general than or incompatible with its shape invariant. For example, a shape of [11, None] is more general than a shape of [11, 17], and [11, 21] is not compatible with [11, 17]. By default (if the argument `shape_invariants` is not specified), it is assumed that the initial shape of each tensor in `loop_vars` is the same in every iteration. The `shape_invariants` argument allows the caller to specify a less specific shape invariant for each loop variable, which is needed if the shape varies between iterations. The `Tensor.set_shape()` function may also be used in the `body` function to indicate that the output loop variable has a particular shape. The shape invariants for SparseTensor and IndexedSlices are treated specially as follows:
a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.shape property. It must be the shape of a vector.
b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]).
`while_loop` implements non-strict semantics, enabling multiple iterations to run in parallel. The maximum number of parallel iterations can be controlled by `parallel_iterations`, which gives users some control over memory consumption and execution order. For correct programs, `while_loop` should return the same result for any parallel_iterations > 0.
For training, TensorFlow remembers the tensors that are produced in the forward inference but needed in back propagation. These tensors can be a main source of memory consumption and often cause OOM problems when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. This for example allows us to train RNN models with very long sequences and large batches.
Args:
cond: A callable that represents the termination condition of the loop.
body: A callable that represents the loop body.
loop_vars: A (possibly nested) tuple or list of numpy array, `Tensor`, and `TensorArray` objects.
shape_invariants: The shape invariants for the loop variables.
parallel_iterations: The number of iterations allowed to run in parallel.
back_prop: Whether backprop is enabled for this while loop.
swap_memory: Whether GPU-CPU memory swap is enabled for this loop.
name: Optional name prefix for the returned tensors.
Returns:
The output tensors for the loop variables after the loop. When the length of `loop_vars` is 1 this is a Tensor, TensorArray or IndexedSlice and when the length of `loop_vars` is greater than 1 it returns a list.
Raises:
TypeError: if `cond` or `body` is not callable.
ValueError: if `loop_vars` is empty.
Example:
```python
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
```
Example with nesting:
```python
ijk_0 = (tf.constant(0), (tf.constant(1), tf.constant(2)))
c = lambda i, (j, k): i < 10
b = lambda i, (j, k): (i + 1, ((j + k), (j - k)))
ijk_final = tf.while_loop(c, b, ijk_0)
```
Example using shape_invariants:
```python
i0 = tf.constant(0)
m0 = tf.ones([2, 2])
c = lambda i, m: i < 10
b = lambda i, m: [i+1, tf.concat(0, [m, m])]
tf.while_loop(
    c, b, loop_vars=[i0, m0],
    shape_invariants=[i0.get_shape(), tensor_shape.TensorShape([None, 2])])
```

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def xw_plus_b(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.xw_plus_b(*args, **kwargs)
It accepts the same arguments as `tf.nn.xw_plus_b`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.xw_plus_b(x1, *args, **kwargs)
is equivalent to
builder.xw_plus_b(*args, **kwargs)(x1)
tf.nn.xw_plus_b
Computes matmul(x, weights) + biases.
Args:
x: a 2D tensor. Dimensions typically: batch, in_units
weights: a 2D tensor. Dimensions typically: in_units, out_units
biases: a 1D tensor. Dimensions: out_units
name: A name for the operation (optional). If not specified "xw_plus_b" is used.
Returns: A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
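The omitted 1st argument is `x`, so the weights and biases are bound first. A minimal sketch under the usual assumptions (TF 0.x, assumed import path and builder construction):
```python
import tensorflow as tf
from tensorbuilder.builder import TensorBuilder  # assumed import path
builder = TensorBuilder(lambda x: x)             # assumed identity construction

x = tf.ones([8, 4])             # batch, in_units
w = tf.ones([4, 3])             # in_units, out_units
b = tf.zeros([3])               # out_units
y = builder.xw_plus_b(w, b)(x)  # == tf.nn.xw_plus_b(x, w, b); shape (8, 3)
```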
def xw_plus_b_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.xw_plus_b_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.xw_plus_b_conv2d_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.xw_plus_b`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def xw_plus_b_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.xw_plus_b_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.xw_plus_b_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.xw_plus_b`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully connected weight matrix, which is multiplied by the `inputs` to produce a `Tensor` of hidden units. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the hidden units. Finally, if `activation_fn` is not None, it is applied to the hidden units as well.
Note: if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a static value for the last dimension, i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for variable_scope.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if x has rank less than 2 or if its last dimension is not set.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def xw_plus_b_v1(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.xw_plus_b_v1(*args, **kwargs)
It accepts the same arguments as `tf.nn.xw_plus_b_v1`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.nn.xw_plus_b_v1(x1, *args, **kwargs)
is equivalent to
builder.xw_plus_b_v1(*args, **kwargs)(x1)
tf.nn.xw_plus_b_v1
Computes matmul(x, weights) + biases.
This is a deprecated version that will soon be removed.
Args:
x: a 2D tensor. Dimensions typically: batch, in_units
weights: a 2D tensor. Dimensions typically: in_units, out_units
biases: a 1D tensor. Dimensions: out_units
name: A name for the operation (optional). If not specified "xw_plus_b_v1" is used.
Returns: A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def xw_plus_b_v1_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.xw_plus_b_v1_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.xw_plus_b_v1_conv2d_layer(*args, **kwargs)(x1), and the keyword argument `activation_fn` is set to `tf.nn.xw_plus_b_v1`.
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the convolutional kernel, that is convolved with the `inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is provided (such as `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is None and a `biases_initializer` is provided then a `biases` variable would be created and added to the activations. Finally, if `activation_fn` is not None, it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters. Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used. If greater than 1, atrous convolution is applied and `stride` must be set to 1.
activation_fn: activation function; set to None to skip it and maintain a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None, skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be able to reuse, the layer scope must be given.
variables_collections: optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs to.
trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises:
ValueError: if both `rate` and `stride` are larger than one.

    @functools.wraps(fn)
    def method(self, *args, **kwargs):
        kwargs['_return_type'] = _return_type
        return self.Then(fn, *args, **kwargs)
def xw_plus_b_v1_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.xw_plus_b_v1_layer(*args, **kwargs)
It accepts the same arguments as tf.contrib.layers.fully_connected
.
However, the 1st argument is omitted, a partial with the rest of the arguments is returned which expects the 1st argument such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.xw_plus_b_v1_layer(*args, **kwargs)(x1) and the keyword argument `activation_fn` is set to `tf.nn.xw_plus_b_v1`.
tf.contrib.layers.fully_connected
Adds a fully connected layer.
fully_connected
creates a variable called weights
, representing a fully
connected weight matrix, which is multiplied by the inputs
to produce a
Tensor
of hidden units. If a normalizer_fn
is provided (such as
batch_norm
), it is then applied. Otherwise, if normalizer_fn
is
None and a biases_initializer
is provided then a biases
variable would be
created and added the hidden units. Finally, if activation_fn
is not None
,
it is applied to the hidden units as well.
Note: that if inputs
have a rank greater than 2, then inputs
is flattened
prior to the initial matrix multiply by weights
.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension,
i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse, the layer scope must be given.
variables_collections: Optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zero_fraction(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zero_fraction(*args, **kwargs)
It accepts the same arguments as `tf.nn.zero_fraction`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.nn.zero_fraction(x1, *args, **kwargs)
is equivalent to
builder.zero_fraction(*args, **kwargs)(x1)
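As a hedged sketch (again assuming a `TensorBuilder` instance named `builder`):
```python
import tensorflow as tf

z = tf.nn.relu(tf.random_normal([100]))

# No extra arguments are needed here, so builder.zero_fraction() is just a
# deferred tf.nn.zero_fraction waiting for its input tensor.
sparsity = builder.zero_fraction()(z)   # == tf.nn.zero_fraction(z)
```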
tf.nn.zero_fraction
Returns the fraction of zeros in `value`.
If `value` is empty, the result is `nan`.
This is useful in summaries to measure and report sparsity. For example:
```python
z = tf.nn.relu(...)
summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z))
```
Args:
value: A tensor of numeric type.
name: A name for the operation (optional).
Returns:
The fraction of zeros in `value`, with type `float32`.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zero_fraction_conv2d_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zero_fraction_conv2d_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.convolution2d`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.convolution2d(x1, *args, **kwargs)
is equivalent to
builder.zero_fraction_conv2d_layer(*args, **kwargs)(x1)
with the keyword argument `activation_fn` set to `tf.nn.zero_fraction`.
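A short sketch of the stated equivalence (the `builder` instance is assumed, as before):
```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 3])

# num_outputs=64 and kernel_size=[3, 3] are fixed up front; the input comes later.
layer = builder.zero_fraction_conv2d_layer(64, [3, 3])

# Per the equivalence above, this behaves like
# tf.contrib.layers.convolution2d(images, 64, [3, 3],
#                                 activation_fn=tf.nn.zero_fraction).
y = layer(images)
```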
tf.contrib.layers.convolution2d
Adds a 2D convolution followed by an optional batch_norm layer.
`convolution2d` creates a variable called `weights`, representing the
convolutional kernel, that is convolved with the `inputs` to produce a
`Tensor` of activations. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided, then a `biases` variable is
created and added to the activations. Finally, if `activation_fn` is not None,
it is applied to the activations as well.
Performs atrous convolution with input stride equal to `rate` if `rate` is greater than one.
Args:
inputs: a 4-D tensor `[batch_size, height, width, channels]`.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 `[kernel_height, kernel_width]` of the filters.
Can be an int if both values are the same.
stride: a list of length 2 `[stride_height, stride_width]`.
Can be an int if both strides are the same. Note that presently
both strides must have the same value.
padding: one of `VALID` or `SAME`.
rate: integer. If less than or equal to 1, a standard convolution is used.
If greater than 1, atrous convolution is applied and `stride`
must be set to 1.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse, the layer scope must be given.
variables_collections: optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: a tensor representing the output of the operation.
Raises: ValueError: if both `rate` and `stride` are larger than one.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zero_fraction_layer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zero_fraction_layer(*args, **kwargs)
It accepts the same arguments as `tf.contrib.layers.fully_connected`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tf.contrib.layers.fully_connected(x1, *args, **kwargs)
is equivalent to
builder.zero_fraction_layer(*args, **kwargs)(x1)
with the keyword argument `activation_fn` set to `tf.nn.zero_fraction`.
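Sketched under the same assumptions as the previous examples:
```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 20])

# Per the equivalence above, this behaves like
# tf.contrib.layers.fully_connected(x, 10, activation_fn=tf.nn.zero_fraction).
y = builder.zero_fraction_layer(10)(x)
```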
tf.contrib.layers.fully_connected
Adds a fully connected layer.
`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
None and a `biases_initializer` is provided, then a `biases` variable is
created and added to the hidden units. Finally, if `activation_fn` is not None,
it is applied to the hidden units as well.
Note: if `inputs` has a rank greater than 2, then `inputs` is flattened
prior to the initial matrix multiply by `weights`.
Args:
inputs: A tensor with at least rank 2 and a known value for the last dimension,
i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
num_outputs: Integer or long, the number of output units in the layer.
activation_fn: activation function, set to None to skip it and maintain
a linear activation.
normalizer_fn: normalization function to use instead of `biases`. If
`normalizer_fn` is provided then `biases_initializer` and
`biases_regularizer` are ignored and `biases` are not created nor added.
Default set to None for no normalizer function.
normalizer_params: normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse, the layer scope must be given.
variables_collections: Optional list of collections for all the variables or
a dictionary containing a different list of collections per variable.
outputs_collections: collection to add the outputs.
trainable: If `True`, also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
scope: Optional scope for `variable_scope`.
Returns: the tensor variable representing the result of the series of operations.
Raises: ValueError: if `x` has rank less than 2 or if its last dimension is not set.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zeros(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zeros(*args, **kwargs)
It accepts the same arguments as `tensorflow.zeros`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.zeros(x1, *args, **kwargs)
is equivalent to
builder.zeros(*args, **kwargs)(x1)
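Note that the omitted 1st argument of `tensorflow.zeros` is the shape, so the partial fixes the remaining arguments (a sketch, `builder` assumed as before):
```python
import tensorflow as tf

# The dtype is fixed now; the shape is supplied later.
make_zeros = builder.zeros(tf.int32)
z = make_zeros([3, 4])   # == tf.zeros([3, 4], tf.int32)
```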
tensorflow.zeros
Creates a tensor with all elements set to zero.
This operation returns a tensor of type `dtype` with shape `shape` and
all elements set to zero.
For example:
```python
tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```
Args:
shape: Either a list of integers, or a 1-D `Tensor` of type `int32`.
dtype: The type of an element in the resulting `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor` with all elements set to zero.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zeros_initializer(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zeros_initializer(*args, **kwargs)
It accepts the same arguments as `tensorflow.zeros_initializer`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.zeros_initializer(x1, *args, **kwargs)
is equivalent to
builder.zeros_initializer(*args, **kwargs)(x1)
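In this TensorFlow version `zeros_initializer` is a plain function whose 1st argument is the shape, so the same partial-application pattern applies (a sketch under that assumption, `builder` assumed as before):
```python
import tensorflow as tf

# The dtype is fixed now; the shape is supplied later.
init = builder.zeros_initializer(tf.float32)
v0 = init([5])   # == tensorflow.zeros_initializer([5], tf.float32)
```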
tensorflow.zeros_initializer
An adaptor for zeros() to match the Initializer spec.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zeros_like(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zeros_like(*args, **kwargs)
It accepts the same arguments as `tensorflow.zeros_like`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.zeros_like(x1, *args, **kwargs)
is equivalent to
builder.zeros_like(*args, **kwargs)(x1)
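A brief sketch (`builder` instance assumed, as above):
```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])

# Keyword arguments are forwarded as well; only the tensor itself is deferred.
z = builder.zeros_like(dtype=tf.float32)(t)   # == tf.zeros_like(t, dtype=tf.float32)
```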
tensorflow.zeros_like
Creates a tensor with all elements set to zero.
Given a single tensor (`tensor`), this operation returns a tensor of the
same type and shape as `tensor` with all elements set to zero. Optionally,
you can use `dtype` to specify a new type for the returned tensor.
For example:
```python
# 'tensor' is [[1, 2, 3], [4, 5, 6]]
tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
```
Args:
tensor: A `Tensor`.
dtype: A type for the returned `Tensor`. Must be `float32`, `float64`,
`int8`, `int16`, `int32`, `int64`, `uint8`, `complex64`, or `complex128`.
name: A name for the operation (optional).
optimize: if true, attempt to statically determine the shape of 'tensor'
and encode it as a constant.
Returns:
A `Tensor` with all elements set to zero.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```
def zeta(
self, *args, **kwargs)
THIS METHOD IS AUTOMATICALLY GENERATED
builder.zeta(*args, **kwargs)
It accepts the same arguments as `tensorflow.zeta`.
However, the 1st argument is omitted; a partial with the rest of the arguments is returned which expects the 1st argument, such that
tensorflow.zeta(x1, *args, **kwargs)
is equivalent to
builder.zeta(*args, **kwargs)(x1)
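Since the omitted 1st argument is `x`, the partial fixes `q` up front (a sketch, `builder` assumed as before):
```python
import tensorflow as tf

x = tf.constant([2.0])
q = tf.constant([1.0])

# With q = 1 the Hurwitz zeta reduces to the Riemann zeta:
# zeta(2, 1) = pi**2 / 6 ~= 1.6449.
z = builder.zeta(q)(x)   # == tf.zeta(x, q)
```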
tensorflow.zeta
Compute the Hurwitz zeta function \(\zeta(x, q)\).
The Hurwitz zeta function is defined as:
\[\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\]
Args:
x: A `Tensor`. Must be one of the following types: `float32`, `float64`.
q: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
```python
@functools.wraps(fn)
def method(self, *args, **kwargs):
    kwargs['_return_type'] = _return_type
    return self.Then(fn, *args, **kwargs)
```