Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
exports your audio as is, without performing any lossy compression or conversion, preserving its fidelity. Other compressors can be used if you need your movie's audio track to be exported in a specific format, or if disk space or download speed is critical, but they may negatively impact the quality of your movie's soundtrack.
- From the Rate menu, select a rate. It is best to export your audio at a rate that matches the rate of your original sound files. For example, if your file has an audio sample rate of 48 kHz and you choose a conversion rate of 22.05 kHz, the sound will play at the same speed, but higher frequencies will be missing, making it sound muffled. For reference, the standard sound quality is 48 kHz for broadcasting and DVD. Lower rates are liable to negatively impact the quality of your movie's soundtrack, but they can be useful if disk space or download speed is critical.
- Select the Size of your audio's encoding. Also known as bit depth, this determines the amount of precision used to record each sample in the soundtrack. The standard size is 16-bit. If you choose 8-bit, the amount of disk space your soundtrack requires is halved, but the audio will sound muffled.
- Select whether to use the Mono or Stereo channel mode. Stereo sound has a separate track for the left and the right speakers, allowing the origin of each sound to realistically match the position of its corresponding action. If you choose Mono, your soundtrack may use less disk space, but the left and right channels will be merged into a single track.
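The disk-space trade-offs described above follow directly from the size of uncompressed audio (sample rate times bytes per sample times channels times duration). A minimal Python sketch of that arithmetic, not part of Harmony:

def uncompressed_audio_size(sample_rate_hz: int, bit_depth: int,
                            channels: int, duration_s: float) -> int:
    """Approximate size in bytes of an uncompressed (PCM) audio track."""
    return int(sample_rate_hz * (bit_depth // 8) * channels * duration_s)

# One minute of standard 48 kHz / 16-bit / stereo audio:
print(uncompressed_audio_size(48_000, 16, 2, 60))   # 11,520,000 bytes, about 11.5 MB
# Dropping to 8-bit mono quarters the size, matching the settings described above:
print(uncompressed_audio_size(48_000, 8, 1, 60))    # 2,880,000 bytes, about 2.9 MB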
https://docs.toonboom.com/help/harmony-15/essentials/export/how-to-configure-quicktime-movie-export.html
2018-10-15T14:44:16
CC-MAIN-2018-43
1539583509326.21
['../Skins/Default/Stylesheets/Images/transparent.gif', '../Resources/Images/HAR/Stage/Export/HAR11/HAR11_export_movieSettings.png', '../Resources/Images/HAR/Stage/Export/HAR11/HAR11_export_videoCompressionSettings.png', '../Resources/Images/HAR/Stage/Export/HAR11/HAR11_export_quality.png', '../Resources/Images/HAR/Stage/Export/HAR11/HAR11_export_soundSettings.png']
docs.toonboom.com
LinkButton. Inherits: BaseButton < Control < CanvasItem < Node < Object. Category: Core. Numeric Constants. Description: This kind of button is primarily used when the interaction with the button causes a context change (like linking to a web page). Member Function Description: Returns the text of the button. Returns the underline mode for this button. Sets the text of the button. Sets the underline mode for this button; the argument must be one of the LinkButton constants (see the constants section).
http://docs.godotengine.org/en/2.1/classes/class_linkbutton.html
2018-10-15T14:31:25
CC-MAIN-2018-43
1539583509326.21
[]
docs.godotengine.org
Example Snippet: SQS, CloudWatch, and SNS

This example adds an Amazon SQS queue and an alarm on queue depth to the environment. The properties that you see in this example are the minimum required properties that you must set for each of these resources. You can download the example at SQS, SNS, and CloudWatch.

Note: This example creates AWS resources, which you might be charged for. For more information about AWS pricing, see. Some services are part of the AWS Free Usage Tier. If you are a new customer, you can test drive these services for free. See for more information.

To use this example, do the following:
1. Create an .ebextensions directory in the top-level directory of your source bundle.
2. Create two configuration files with the .config extension and place them in your .ebextensions directory. One configuration file defines the resources, and the other configuration file defines the options.
3. Deploy your application to Elastic Beanstalk.

YAML relies on consistent indentation. Match the indentation level when replacing content in an example configuration file and ensure that your text editor uses spaces, not tab characters, to indent.

Create a configuration file (e.g., sqs.config) that defines the resources. In this example, we create an SQS queue and define the VisibilityTimeout property in the MySQSQueue resource. Next, we create an SNS topic and specify that email gets sent to [email protected] when the alarm is fired. Finally, we create a CloudWatch alarm if the queue grows beyond 10 messages. In the Dimensions property, we specify the name of the dimension and the value representing the dimension measurement. We use Fn::GetAtt to return the value of QueueName from MySQSQueue.

#This sample requires you to create a separate configuration file to define the custom options for the SNS topic and SQS queue.
Resources:
  MySQSQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout:
        Fn::GetOptionSetting:
          OptionName: VisibilityTimeout
          DefaultValue: 30
  AlarmTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint:
            Fn::GetOptionSetting:
              OptionName: AlarmEmail
              DefaultValue: "[email protected]"
          Protocol: email
  QueueDepthAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: "Alarm if queue depth grows beyond 10 messages"
      Namespace: "AWS/SQS"
      MetricName: ApproximateNumberOfMessagesVisible
      Dimensions:
        - Name: QueueName
          Value: { "Fn::GetAtt": [ "MySQSQueue", "QueueName" ] }
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 10
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - Ref: AlarmTopic
      InsufficientDataActions:
        - Ref: AlarmTopic

Outputs:
  QueueURL:
    Description: "URL of newly created SQS Queue"
    Value: { Ref: "MySQSQueue" }
  QueueARN:
    Description: "ARN of newly created SQS Queue"
    Value: { "Fn::GetAtt": [ "MySQSQueue", "Arn" ] }
  QueueName:
    Description: "Name newly created SQS Queue"
    Value: { "Fn::GetAtt": [ "MySQSQueue", "QueueName" ] }

For more information about the resources used in this example configuration file, see the following references:

Create a separate configuration file called options.config and define the custom option settings.
option_settings: "aws:elasticbeanstalk:customoption": VisibilityTimeout : 30 AlarmEmail : "[email protected]" These lines tell Elastic Beanstalk to get the values for the VisibilityTimeout and Subscription Endpoint properties from the VisibilityTimeout and Subscription Endpoint values in a config file (options.config in our example) that contains an option_settings section with an aws:elasticbeanstalk:customoption section that contains a name-value pair that contains the actual value to use. In the example above, this means 30 and "[email protected]" would be used for the values. For more information about Fn::GetOptionSetting, see Functions
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-environment-resources-sqs.html
2018-10-15T15:53:18
CC-MAIN-2018-43
1539583509326.21
[]
docs.aws.amazon.com
The Dashboard Designer provides the capability to connect to multiple types of SQL databases using the Data Source wizard. You can also use the data access API to connect to the database and select the required data in code. This tutorial describes how to establish a connection to a PostgreSQL database and select the required data. To connect to various SQL databases, the Dashboard Designer requires a corresponding provider to be installed on the client machine. To learn more, see Supported Data Sources.

To connect to the PostgreSQL database in the Dashboard Designer, perform the following steps. Click the New Data Source button in the Data Source ribbon tab. On the first page of the invoked Data Source Wizard dialog, select Database and click Next. On the next page, select the PostgreSQL data provider and specify the required connection parameters:
- Server name: Specifies the name of the PostgreSQL database server to which the connection should be established.
- Port: Specifies the port used to connect to the PostgreSQL database server.
- User name: Specifies the user name used to authenticate to the PostgreSQL database server.
- Password: Specifies the password used to authenticate to the PostgreSQL database server.
- Database: Specifies the name of the database that contains the required data.
On the last page, you can optionally add query parameters and preview data. Click Finish to create the data source.

To create a data source that uses a connection to the PostgreSQL database in code, create an instance of the DashboardSqlDataSource class and perform the following steps. Specify connection parameters to the PostgreSQL database: create a PostgreSqlConnectionParameters class object and specify the following properties. Pass the resulting PostgreSqlConnectionParameters object to the DashboardSqlDataSource to connect to the PostgreSQL server.

using DevExpress.DashboardCommon;
using DevExpress.DataAccess.ConnectionParameters;
using DevExpress.DataAccess.Sql;
// ...
PostgreSqlConnectionParameters postgreParams = new PostgreSqlConnectionParameters();
postgreParams.ServerName = "localhost";
postgreParams.PortNumber = 5432;
postgreParams.DatabaseName = "Northwind";
postgreParams.UserName = "Admin";
postgreParams.Password = "password";

DashboardSqlDataSource sqlDataSource = new DashboardSqlDataSource("Data Source 1", postgreParams);
SelectQuery selectQuery = SelectQueryFluentBuilder
    .AddTable("SalesPerson")
    .SelectColumns("CategoryName", "Extended Price")
    .Build("Query 1");
sqlDataSource.Queries.Add(selectQuery);
sqlDataSource.Fill();
dashboard.DataSources.Add(sqlDataSource);
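For comparison only, and not part of the DevExpress API: the same connection parameters map directly onto a plain Python connection using psycopg2. The server, port, database, user, password and column names below are the placeholder values from the example above:

import psycopg2  # assumption: psycopg2 is installed

# Connect with the same parameters used by PostgreSqlConnectionParameters above.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="Northwind",
    user="Admin",
    password="password",
)
with conn, conn.cursor() as cur:
    # Roughly equivalent to the SelectQuery built in the designer example.
    cur.execute('SELECT "CategoryName", "Extended Price" FROM "SalesPerson"')
    rows = cur.fetchall()
print(len(rows), "rows fetched")
conn.close()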
https://docs.devexpress.com/Dashboard/113922/creating-dashboards/creating-dashboards-in-the-winforms-designer/providing-data/sql-data-source/connecting-to-sql-databases/postgresql
2018-10-15T14:45:19
CC-MAIN-2018-43
1539583509326.21
[]
docs.devexpress.com
Upgrade Ambari Server. On the host running Ambari Server, for RHEL/CentOS/Oracle Linux:

yum clean all
yum info ambari-server

In the info output, visually validate that there is an available version containing "2.6", then run:

yum upgrade ambari-server

Check for upgrade success by noting progress during the Ambari Server installation process you started in Step 5. As the process runs, the console displays output similar, although not identical, to the following:

Setting up Upgrade Process
Resolving Dependencies
--> Running transaction check

If the upgrade fails, the console displays output similar to the following:

Setting up Upgrade Process
No Packages marked for Update

A successful upgrade displays output similar to the following:

Updated: ambari-server.noarch 0:2.6.1-143
Complete!
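If you script this step, the "visually validate" check can be automated. A minimal Python sketch, assuming yum is on the PATH, the script runs with sufficient privileges, and the package is named ambari-server exactly as above:

import subprocess

def ambari_upgrade_available(required: str = "2.6") -> bool:
    """Return True if `yum info ambari-server` lists a version containing `required`."""
    out = subprocess.run(
        ["yum", "info", "ambari-server"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(
        line.strip().startswith("Version") and required in line
        for line in out.splitlines()
    )

if ambari_upgrade_available():
    subprocess.run(["yum", "-y", "upgrade", "ambari-server"], check=True)
else:
    print("No ambari-server 2.6 package available; check your repositories.")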
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.1/bk_installing-hdf-on-hdp-ppc/content/ch01s04.html
2018-10-15T16:15:00
CC-MAIN-2018-43
1539583509326.21
[]
docs.hortonworks.com
Tcl8.6.7/Tk8.6.7 Documentation > [incr Tcl] Package Commands, version 4.1.0 > class - itcl::class — create a class of objects - SYNOPSIS - DESCRIPTION - CLASS DEFINITIONS - inherit baseClass ?baseClass...? - constructor args ?init? body - destructor body - className - objName info option ?args...? - CHAINING METHODS/PROCS - AUTO-LOADING - C PROCEDURES - KEYWORDS NAMEitcl::class — create a class of objects SYNOPSISitcl::class className { inherit baseClass ?baseClass...? constructor args ?init? body destructor body method name ?args? ?body? proc name ?args? ?body? variable varName ?init? ?config? common varName ?init? public command ?arg arg ...? protected command ?arg arg ...? private command ?arg arg ...? set varName ?value? array option ?arg arg ...? } className objName ?arg arg ...? objName method ?arg arg ...? className::proc ?arg arg ...? DESCRIPTIONThe fundamental construct in [incr Tcl] is the class definition. Each class acts as a template for actual objects that can be created. The class itself is a namespace which contains things common to all objects. Each object has its own unique bundle of data which contains instances of the "variables" defined in the class definition. Each object also has a built-in variable named "this", which contains the name of the object. Classes can also have "common" data members that are shared by all objects in a class. Two types of functions can be included in the class definition. "Methods" are functions which operate on a specific object, and therefore have access to both "variables" and "common" data members. "Procs" are ordinary procedures in the class class can only be defined once, although the bodies of class methods and procs can be defined again and again for interactive debugging. See the body and configbody commands for details. Each namespace can have its own collection of objects and classes. The list of classes available in the current context can be queried using the "itcl::find classes" command, and the list of objects, with the "itcl::find objects" command. A class can be deleted using the "delete class" command. Individual objects can be deleted using the "delete object" command. CLASS DEFINITIONS - class className definition - Provides the definition for a class named className. If the class className already exists, or if a command called className exists in the current namespace context, this command returns an error. If the class definition is successfully parsed, className becomes a command in the current context, handling the creation of objects for this class. The class definition is evaluated as a series of Tcl statements that define elements within the class. The following class definition commands are recognized: - inherit baseClass ?baseClass...? - Causes the current class to inherit characteristics from one or more base classes. Classes must have been defined by a previous class command, or must be available to the auto-loading facility (see "AUTO-LOADING" below). A single class definition can contain no more than one inherit command. The order of baseClass names in the inherit list affects the name resolution for class members. When the same member name appears in two or more base classes, the base class that appears first in the inherit list takes precedence. For example, if classes "Foo" and "Bar" both contain the member "x", and if another class class constructors that require arguments. Variables in the args specification can be accessed in the init code fragment, and passed to base class constructors. 
After evaluating the init statement, any base class constructors that have not been executed are invoked automatically without arguments. This ensures that all base classes are fully constructed before the constructor body is executed. By default, this scheme causes constructors to be invoked in order from least- to most-specific. This is exactly the opposite of the order that classes class hierarchy are invoked in order from most- to least-specific. This is the order that the classes class method, a method can be invoked like any other command-simply by using its name. Outside of the class context, the method name must be prefaced an object name, which provides the context for the data that it manipulates. Methods in a base class that are redefined in the current class, or hidden by another base class, can be qualified using the "className::method" syntax. - proc name ?args? ?body? - Declares a proc called name. A proc is an ordinary procedure within the class class method or proc, a proc can be invoked like any other command-simply by using its name. In any other namespace context, the proc is invoked using a qualified name like "className::proc". Procs in a base class that are redefined in the current class, or hidden by another base class, can also be accessed via their qualified name. - variable varName ?init? ?config? - Defines an object-specific variable named varName. All object-specific variables are automatically available in class class definition using the configbody command. - common varName ?init? - Declares a common variable named varName. Common variables reside in the class namespace and are shared by all objects belonging to the class. They are just like global variables, except that they need not be declared with the usual global command. They are automatically visible in all class class definition. This allows common data members to be initialized as arrays. For example: itcl::class Foo { class. CLASS USAGEOnce a class has been defined, the class name can be used as a command to create new objects belonging to the class. - className objName ?args...? - Creates a new object in class className with the name objName. Remaining arguments are passed to the constructor of the most-specific class. This in turn passes arguments to base class className<number>, where the className part is modified to start with a lowercase letter. In class Once class class where it was defined. If the "config" code generates an error, the variable is set back to its previous value, and the configure method returns an error. - objName isa className - Returns non-zero if the given className can be found in the object's heritage, and zero otherwise. - objName info option ?args...? - Returns information related to a particular object named objName, or to its class definition. The option parameter includes the following things, as well as the options recognized by the usual Tcl "info" command: - objName info class - Returns the name of the most-specific class for object objName. - objName info inherit - Returns the list of base classes as they were defined in the "inherit" command, or an empty string if this class has no base classes. - objName info heritage - Returns the current class name and the entire list of base classes in the order that they are traversed for member lookup and object destruction. - objName info function ?cmdName? ?-protection? ?-type? ?-name? ?-args? ?-body? 
- With no arguments, this command returns a list of all class/PROCSSometimes a base class has a method or proc that is redefined with the same name in a derived class. This is a way of making the derived class handle the same operations as the base class, but with its own specialized behavior. For example, suppose we have a Toaster class that looks like this: itcl::class Toaster { variable crumbs 0 method toast {nslices} { if {$crumbs > 50} { error "== FIRE! FIRE! ==" } set crumbs [expr $crumbs+4*$nslices] } method clean {} { set crumbs 0 } } We might create another class like SmartToaster that redefines the "toast" method. If we want to access the base class method, we can qualify it with the base class name, to avoid ambiguity: itcl::class SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [Toaster::toast $nslices] } } Instead of hard-coding the base class name, we can use the "chain" command like this: itcl::class SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [chain $nslices] } } The chain command searches through the class hierarchy for a slightly more generic (base class) implementation of a method or proc, and invokes it with the specified arguments. It starts at the current class context and searches through base classes in the order that they are reported by the "info heritage" command. If another implementation is not found, this command does nothing and returns the null string. AUTO-LOADINGClass definitions need not be loaded explicitly; they can be loaded as needed by the usual Tcl auto-loading facility. Each directory containing class definition files should have an accompanying "tclIndex" file. Each line in this file identifies a Tcl procedure or [incr Tcl] class definition and the file where the definition can be found. For example, suppose a directory contains the definitions for classes "Toaster" and "SmartToaster". Then the "tclIndex" file for this directory would look like: # Tcl autoload index file, version 2.0 for [incr T(:, classes will be auto-loaded as needed when used in an application. C PROCEDURESC procedures can be integrated into an [incr Tr Tcl] makes this possible by automatically setting up the context before executing the C procedure. This scheme provides a natural migration path for code development. Classes can be developed quickly using Tcl code to implement the bodies. An entire application can be built and tested. When necessary, individual bodies can be implemented with C code to improve performance.
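As a point of comparison only (this is Python, not [incr Tcl]), the chain command described above corresponds to invoking the next more general implementation up the class hierarchy without naming the base class; Python's super() plays the same role in a rough re-rendering of the Toaster example:

class Toaster:
    def __init__(self):
        self.crumbs = 0

    def toast(self, nslices):
        if self.crumbs > 50:
            raise RuntimeError("== FIRE! FIRE! ==")
        self.crumbs += 4 * nslices

    def clean(self):
        self.crumbs = 0


class SmartToaster(Toaster):
    def toast(self, nslices):
        # Like itcl's "chain": defer to the next implementation up the
        # hierarchy instead of hard-coding the Toaster base class name.
        if self.crumbs > 40:
            self.clean()
        return super().toast(nslices)


t = SmartToaster()
t.toast(2)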
http://docs.activestate.com/activetcl/8.6/tcl/ItclCmd/class.html
2018-10-15T15:40:02
CC-MAIN-2018-43
1539583509326.21
[]
docs.activestate.com
Compiling for Windows¶ Requirements¶ For compiling under Windows, the following is required: - Visual C++, Visual Studio Community (recommended), version 2013 (12.0) or later. Make sure you read Installing Visual Studio caveats below or you will have to run/download the installer again. - Python 2.7+ or Python 3.5+. - Pywin32 Python Extension for parallel builds (which increase the build speed by a great factor). - SCons build system. Setting up SCons¶ Python adds the interpreter (python.exe) to the path. It usually installs in C:\Python (or C:\Python[Version]). SCons installs inside the Python install (typically in the Scripts folder) and provides a batch file called scons.bat. The location of this file can be added to the path or it can simply be copied to C:\Python together with the interpreter executable. To check whether you have installed Python and SCons correctly, you can type python --version and scons --version into the Windows Command Prompt ( cmd.exe). If commands above do not work, make sure you add Python to your PATH environment variable after installing it, and check again. Setting up Pywin32¶ pywin32-221.win32-py2.7.exe. The amd64 version of Pywin32 is for a 64-bit version of Python pywin32-221.win-amd64-py2.7.exe. Change the py number to install for your version of Python (check via python --version mentioned above). Installing Visual Studio caveats¶ If installing Visual Studio 2015 or later, make sure to run Custom installation, not Typical and select C++ as language there (and any other things you might need). The installer does not install C++ by default. C++ was the only language made optional*. Downloading Godot’s source¶ Godot’s source is hosted on GitHub. Downloading it (cloning) via Git is recommended. The tutorial will presume from now on that you placed the source into C:\godot. Compiling¶ SCons will not be able out of the box to compile from the Windows Command Prompt ( cmd.exe) because SCons and Visual C++ compiler will not be able to locate environment variables and executables they need for compilation. Therefore, you need to start a Visual Studio command prompt. It sets up environment variables needed by SCons to locate the compiler. It should be called similar to one of the below names (for your respective version of Visual Studio): - “Developer Command Prompt for VS2013” - “VS2013 x64 Native Tools Command Prompt” - “VS2013 x86 Native Tools Command Prompt” - “VS2013 x64 Cross Tools Command Prompt” - “VS2013 x86 Cross Tools Command Prompt” You should be able to find at least the Developer Command Prompt for your version of Visual Studio in your start menu. However Visual Studio sometimes seems to not install some of the above shortcuts, except the Developer Console at these locations that are automatically searched by the start menu search option: Win 7: C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Visual Studio 2015\Visual Studio Tools C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Visual Studio 2013\Visual Studio Tools If you found the Developer Console, it will do for now to create a 32-bit version of Godot, but if you want the 64-bit version, you might need to setup the prompts manually for easy access. If you don’t see some of the shortcuts, “How the prompts actually work” section below will explain how to setup these prompts if you need them. About the Developer/Tools Command Prompts and the Visual C++ compiler¶ There is a few things you need to know about these consoles and the Visual C++ compiler. 
Your Visual Studio installation will ship with several Visual C++ compilers, them being more or less identical, however each cl.exe (Visual C++ compiler) will compile Godot for a different architecture (32-bit x86 or 64-bit x86; the ARM compiler is not supported). The Developer Command Prompt will build a 32-bit version of Godot by using the 32-bit Visual C++ compiler. Native Tools Prompts (mentioned above) are used when you want the 32-bit cl.exe to compile a 32-bit executable (x86 Native Tools Command Prompt). For the 64-bit cl.exe, it will compile a 64-bit executable (x64 Native Tools Command Prompt). The Cross Tools are used when your Windows is using one architecture (32-bit, for example) and you need to compile to a different architecture (64-bit). As you might be familiar, 32-bit Windows can not run 64-bit executables, but you still might need to compile for them. For example: - “VS2013 x64 Cross Tools Command Prompt” will use a 32-bit cl.exe that will compile a 64 bit application. - “VS2013 x86 Cross Tools Command Prompt” will use a 64-bit cl.exe that will compile a 32-bit application. This one is useful if you are running a 32-bit Windows. On a 64-bit Windows, you can run any of above prompts and compilers ( cl.exe executables) because 64-bit Windows can run any 32-bit application. 32-bit Windows cannot run 64-bit executables, so the Visual Studio installer won’t even install shortcuts for some of these prompts. Note that you need to choose the Developer Console or the correct Tools Prompt to build Godot for the correct architecture. Use only Native Prompts if you are not sure yet what exactly Cross Compile Prompts do. Running SCons¶ Once inside the Developer Console/Tools Console Prompt, go to the root directory of the engine source code and type: C:\godot> scons platform=windows Tip: if you installed “Pywin32 Python Extension” you can append the -j command to instruct SCons to run parallel builds like this: if you setup the “Pywin32 Python Extension”. If all goes well, the resulting binary executable will be placed in C:\godot\bin\ with the name of godot.windows.tools.32.exe or godot.windows.tools.64.exe. SCons will automatically detect what compiler architecture the environment (the prompt) is setup for and will build a corresponding executable. This executable file contains the whole engine and runs without any dependencies. Executing it will bring up the Project Manager. How the prompts actually work¶ The Visual Studio command prompts are just shortcuts that call the standard Command Prompt and have it run a batch file before giving you control. The batch file itself is called vcvarsall.bat and it sets up environment variables, including the PATH variable, so that the correct version of the compiler can be run. The Developer Command Prompt calls a different file called VsDevCmd.bat but none of the other tools that this batch file enables are needed by Godot/SCons. Since you are probably using Visual Studio 2013 or 2015, if you need to recreate them manually, use the below folders, or place them on the desktop/taskbar: C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Visual Studio 2015\Visual Studio Tools C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Visual Studio 2013\Visual Studio Tools Start the creation of the shortcut by pressing the right mouse button/New/Shortcut in an empty place in your desired location. 
Then copy one of these commands below for the corresponding tool you need into the “Path” and “Name” sections of the shortcut creation wizard, and fix the path to the batch file if needed. - Visual Studio 2013 is in the “Microsoft Visual Studio 12.0” folder. - Visual Studio 2015 is in the “Microsoft Visual Studio 14.0” folder. - etc. Name: Developer Command Prompt for VS2013 Path: %comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\VsDevCmd.bat"" Name: VS2013 x64 Cross Tools Command Prompt Path: %comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat"" x86_amd64 Name: VS2013 x64 Native Tools Command Prompt Path: %comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat"" amd64 Name: VS2013 x86 Native Tools Command Prompt Path: %comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat"" x86 Name: VS2013 x86 Cross Tools Command Prompt Path: %comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat"" amd64_x86 After you create the shortcut, in the shortcut’s properties, that you can access by right clicking with your mouse on the shortcut itself, you can choose the starting directory of the command prompt (“Start in” field). Some of these shortcuts (namely the 64-bit compilers) seem to not be available in the Express edition of Visual Studio or Visual C++. Before recreating the commands, make sure that cl.exe executables are present in one of these locations, they are the actual compilers for the architecture you want to build from the command prompt. x86 (32-bit) cl.exe C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\cl.exe x86 (32-bit) cl.exe for cross-compiling for 64-bit Windows. C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64\cl.exe x64 (64-bit) cl.exe C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\cl.exe x64 (64-bit) cl.exe for cross-compiling for 32-bit Windows. C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64_x86\cl.exe In case you are wondering what these prompt shortcuts do, they call cmd.exe with the \k option and have it run a Batch file. %comspec% - path to cmd.exe \k - keep alive option of the command prompt remainder - command to run via cmd.exe cmd.exe \k(eep cmd.exe alive after commands behind this option run) ""runme.bat"" with_this_option How to run an automated build of Godot¶ If you just need to run the compilation process via a Batch file or directly in the Windows Command Prompt you need to use the following command: "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" x86 with one of the following parameters: - x86 (32-bit cl.exe to compile for the 32-bit architecture) - amd64 (64-bit cl.exe to compile for the 64-bit architecture) - x86_amd64 (32-bit cl.exe to compile for the 64-bit architecture) - amd64_x86 (64-bit cl.exe to compile for the 32-bit architecture) and after that one, you can run SCons: scons platform=windows or you can run them together: 32-bit Godot "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" x86 && scons platform=windows 64-bit Godot "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" amd64 && scons platform=windows via the Visual Studio Build button. However, make sure that you have installed Pywin32 so that parallel (-j) builds work properly. 
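If you would rather drive the vcvarsall-then-SCons sequence from a script than from a shortcut, a minimal Python sketch is shown below; the Visual Studio path, the checkout location and the -j value are assumptions to adjust for your setup, not values mandated by Godot:

import subprocess

# Hypothetical paths: adjust to your Visual Studio version and Godot checkout.
VCVARSALL = r"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat"
GODOT_DIR = r"C:\godot"

def build_godot(arch: str = "x86", jobs: int = 4) -> None:
    """Set up the MSVC environment and run SCons, mirroring the manual steps above."""
    # cmd.exe runs vcvarsall first so cl.exe and friends are on PATH for scons.
    cmd = f'call "{VCVARSALL}" {arch} && scons platform=windows -j{jobs}'
    subprocess.run(cmd, shell=True, cwd=GODOT_DIR, check=True)

build_godot("x86")      # 32-bit editor build
# build_godot("amd64")  # 64-bit editor build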
To cross-compile for Windows you need a MinGW toolchain; the package to install depends on your distro, and here are some known ones. Before allowing you to attempt the compilation, SCons will check for the following binaries in your $PATH:

[email protected]:~$ ${MINGW32_PREFIX}gcc --version
i686-w64-mingw32-gcc (GCC) 6.1.0 20160427 (Mageia MinGW 6.1.0-1.mga6)

Troubleshooting

Cross-compiling from some versions of Ubuntu may lead to this bug, due to a default configuration lacking support for POSIX threading. You can change that configuration following those instructions.

Creating Windows export templates

Windows export templates are created by compiling Godot as release, with the following flags:

- (using the Mingw32 command prompt, using the bits parameter)
C:\godot> scons platform=windows tools=no target=release bits=32
C:\godot> scons platform=windows tools=no target=release_debug bits=32

- (using the Mingw-w64 command prompt, using the bits parameter)
C:\godot> scons platform=windows tools=no target=release bits=64
C:\godot> scons platform=windows tools=no target=release_debug bits=64

- (using the Visual Studio command prompts for the correct architecture, notice the lack of the bits parameter)
C:\godot> scons platform=windows tools=no target=release
C:\godot> scons platform=windows tools=no target=release_debug

If you plan on replacing the standard templates, copy these to:
C:\USERS\YOURUSER\AppData\Roaming\Godot\Templates
with the following names:
windows_32_debug.exe
windows_32_release.exe
windows_64_debug.exe
windows_64_release.exe
http://docs.godotengine.org/en/3.0/development/compiling/compiling_for_windows.html
2018-10-15T14:45:14
CC-MAIN-2018-43
1539583509326.21
['../../_images/wintemplates.png']
docs.godotengine.org
Introduction

Observium is at its core a collaborative project, and welcomes contributions from its users. In order to maintain the quality of the software we have strict quality controls on contributions, particularly involving UI elements, but if you read our development documentation and guidelines, you shouldn't have much trouble!

Licensing and Copyright

By contributing code you are making a donation of the code, unencumbered by licensing restrictions, to Observium Limited for use within Observium. Observium Community Edition is currently licensed under a modified QPL-1.0 license, and Professional Edition under a commercial license. We reserve the right to change the license for all or part of the code at any time.

Design and Development Philosophy

Observium's primary driving philosophy has remained unchanged for over a decade and can be summed up as follows:
- Minimal User Intervention - We aim to produce a platform which requires the minimum of user intervention. This means a focus on autodiscovery and an emphasis on sane defaults and low configuration requirements.
- Intuitive Aesthetics - We consider good aesthetics and an intuitive user experience to be paramount to the usefulness of the platform.
- Code Maintainability - We strive to ensure that our codebase is easy to read and easy to maintain. We prefer to keep an emphasis on code quality over performance and minor optimisation.

What can be done?

Our view is that everything can always be improved, so there is really no limit to what can be done. Key areas of possible improvement include:
- Documentation
- New device and MIB support
- Improving presentation and use of existing data
- Migration of older code to newer standards

Where to start
- Check the documentation below
- Check with the developers on the IRC channel or Mailing Lists what you want to do, to avoid duplicating or doing unnecessary work

There is a lot of code of different ages in Observium, which has evolved through a number of different code stages. Please check with the developers first before taking something as a base to start from, as it may no longer be the preferred way of doing things!
https://docs.observium.org/developing/
2018-10-15T15:52:35
CC-MAIN-2018-43
1539583509326.21
[]
docs.observium.org
Ticket #1926 (closed task: invalid) port dialer2's journal to dialer3/ophonekitd/libframework-glib Description Port dialer2's journal to dialer3/ophonekitd/libframework-glib. Change History comment:2 Changed 10 years ago by zecke - Component changed from System Software to unknown And move to unknown until SHR has its own component.
http://docs.openmoko.org/trac/ticket/1926
2018-10-15T14:36:06
CC-MAIN-2018-43
1539583509326.21
[]
docs.openmoko.org
Metrics is protocol agnostic: you can push your data with OpenTSDB and query it with Warp10, or vice versa. Metrics doesn't lock you into a proprietary protocol. Instead, we believe the plurality of existing protocols from open source solutions can be used for pushing to and querying the platform.

Supported protocols

Each protocol provides different capabilities. Some will be easier than others but may have fewer features. We've tried to summarize them with this simple table. Most of the protocols don't include authentication, so you need to add the tokens in the Basic Auth field. If you're wondering which protocol to choose, here is a simple guideline:
- You want to push JSON? -> OpenTSDB
- You want to instrument your code? -> Prometheus SDK + Beamium
- You want powerful analytics? -> Warp10 & WarpScript
- You want BI tools integration like Tableau, Power BI, Qlik? -> SQL

Authentication and endpoints

Metrics has built-in security to secure your data. In the previous section you've learnt where to get the tokens from the manager. We've generated a default pair of tokens:
- a READ token to query
- a WRITE token to push data

Except for Warp10 (where it's provided as a specific header for pushes and in the DSL payload for queries), this token is used as the password in the Basic Auth. Most of the protocols are available through HTTPS endpoints. Here's the logic for pushing: https://[whatever you want]:[write token]@[protocol].[region].metrics.ovh.net. The user in the basic authentication is discarded.

Protocols

Graphite

Abstract: Graphite is the first time series platform with basic analytics capabilities. This is the reason why many developers and sysadmins like it.

Data Model: Graphite's data model uses a dot-separated format that describes a metric name, e.g.: servers.srv_1.dc.gra1.cpu0.nice

How to Push data: Since Graphite doesn't support authentication, we've developed a small proxy that sits on your host and accepts pushes to TCP:2003 like Graphite does. It's named Fossil and it's open source.

Queries: Queries over Graphite are performed with URL-based query parameters, a JSON payload or a form payload.

Query data with Graphite: The full documentation is available at. curl '(os.cpu, 1048576)' To authenticate requests, basic auth is used. You must fill the basic auth password with the read token available in your OVH Metrics Data Platform manager.

Grafana: Graphite is integrated with Grafana: read more on the Graphite builtin data source.

Compatibility: The Graphite API documentation is available at. We are currently supporting these calls:

InfluxDB

Abstract: InfluxDB is a proprietary time series database that integrates with Telegraf.

Data Model: InfluxDB uses its own data model:
<measurement>[,<tag_key>=<tag_value>[,<tag_key>=<tag_value>]] <field_key>=<field_value>[,<field_key>=<field_value>] [<timestamp>]

Authentication: Use Basic Auth with the URL: https://[whatever you want]:[write token]@influxdb.[region].metrics.ovh.net

How to push data: The full documentation is available at

Bash & curl:
curl -i -XPOST \
  '' \
  --data-binary \
  'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'

Queries: InfluxDB has its own query DSL, which mimics SQL without being plain ANSI SQL.
SELECT <field_key>[,<field_key>,<tag_key>] FROM <measurement_name>[,<measurement_name>]
Metrics currently doesn't support the Influx Query Language.

Compatibility: InfluxDB has the notion of databases. This concept doesn't exist within Metrics. If you need segmentation, you can use different Metrics projects or isolate with an additional label.
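Going back to the Graphite section above: Fossil accepts Graphite plaintext pushes on TCP 2003, so a minimal Python sketch of that push path could look like the following. The host address and the metric value are assumptions for illustration, not values from this documentation:

import socket
import time

# Assumptions: a local Fossil proxy listens on TCP 2003 and forwards
# Graphite plaintext lines ("<metric.path> <value> <timestamp>\n") upstream.
FOSSIL_HOST, FOSSIL_PORT = "127.0.0.1", 2003

def push_graphite(path: str, value: float) -> None:
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((FOSSIL_HOST, FOSSIL_PORT), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

push_graphite("servers.srv_1.dc.gra1.cpu0.nice", 0.42)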
OpenTSDB

Abstract: OpenTSDB is a scalable time series database designed to store and serve massive amounts of time series data without losing granularity.

Authentication: To push data to the platform, you will need a WRITE TOKEN. Except for Warp10, it'll be used as the password in the basic access authentication, like this: https://[whatever you want]:[write token]@opentsdb.[region].metrics.ovh.net

How to Push data: The full documentation is available at

{ "metric": "sys.cpu.nice", "timestamp": 1346846400, "value": 18, "tags": { "host": "web01", "dc": "lga" } }

or

[
  { "metric": "sys.cpu.nice", "timestamp": 1346846400, "value": 18, "tags": { "host": "web01", "dc": "lga" } },
  { "metric": "sys.cpu.nice", "timestamp": 1346846400, "value": 9, "tags": { "host": "web02", "dc": "lga" } }
]

Bash & curl:
curl -X POST -d @opentsdb.json \
  ''

Python:
>>> import requests
>>> headers = {'X-Warp10-Token': 'WRITE_TOKEN', 'content-type': 'text/plain'}
>>> payload = {}
>>> payload["metric"] = "sys.cpu.nice"
>>> payload["timestamp"] = "1346846400"
>>> payload["value"] = 18
>>> tags = { "host": "web01", "dc": "lga"}
>>> payload["tags"] = tags
>>> r = requests.post(url, headers=headers, data=payload)
>>> r.status_code

Query data with OpenTSDB: The full documentation is available at

{
  "start": 1487066514,
  "end": 1487070014,
  "queries": [{
    "metric": "os.cpu",
    "aggregator": "max",
    "downsample": "5m-avg",
    "tags": { "host": "*", "dc": "*" }
  }]
}

Bash & curl:
curl --data-binary @QUERY \
  ''

Grafana: OpenTSDB is integrated with Grafana: read more on the OpenTSDB builtin data source.

Compatibility: We are currently supporting these calls. We do not support pushing using the telnet mode.

Prometheus

Abstract: Prometheus is another open-source monitoring system that has its roots as a clone of Borgmon from Google. Prometheus has a unique approach to collecting measurements: it pulls them, using scrapers that read metrics from HTTP endpoints (like /metrics) exposed by hosts and applications. This mode has many advantages, but it's not safe by design: if the scraper or the network is down, you can lose metrics. In order to overcome these issues, we've developed Beamium.

Authentication: To push data to the platform, you will need a WRITE TOKEN. Except for Warp10, it'll be used as the password in the basic access authentication, like this: https://[whatever you want]:[write token]@prometheus.[region].metrics.ovh.net

How to Expose data: If you want to use a Prometheus exporter (or instrument an app with a /metrics endpoint), we recommend using Beamium, which will:
- offer a DFO (Disk Fail Over, in case of network failure)
- scrape as many /metrics endpoints as you have
- filter metrics
- recover datapoints from a network outage
- route datapoints to multiple destinations (or sinks)

Beamium can parse Prometheus and Sensision (Warp10) formats, so every Prometheus exporter (node-exporter, haproxy-exporter, ...) will be compliant with Beamium.

How to Push data: We also provide a PushGateway-compliant endpoint.

PromQL: PromQL is a query language for Prometheus. It offers basic query capabilities, like OpenTSDB, plus a way to use operators between two series.

Compatibility: We currently support:
- client libraries for instrumenting application code, thanks to Beamium, our Prometheus scraper.

PromQL by Metrics is quite new and has not been extensively tested. If you have any feedback: OVH Community.

Grafana: PromQL is integrated with Grafana: read more on the Prometheus builtin data source.
SQL

SQL support is coming and will allow Metrics integration with all solutions that provide JDBC/ODBC support. At first, SQL support won't be ANSI SQL but a subset that allows you to fetch data and perform basic aggregations, which is mostly enough to use your data from BI tools.

Warp10

The Warp 10 platform is designed to collect, store and manipulate sensor data. Sensor data are ingested as sequences of measurements (also called time series). The Warp 10 platform offers the possibility for each measurement to also have spatial metadata specifying the geographic coordinates and/or the elevation of the sensor at the time of the reading. Those augmented measurements form what we call Geo Time Series® (GTS).

Compatibility: Being based on Warp10, we are first-class citizens for it. All functions and calls are supported.

How to Push data: The full documentation is available at

1380475081000000// foo{label0=val0,label1=val1} 42

Bash & curl:
curl -H 'X-Warp10-Token: WRITE_TOKEN' -H 'Transfer-Encoding: chunked' \
  --data-binary @METRICS_FILE ''
# or
curl -H 'X-Warp10-Token: WRITE_TOKEN' \
  --data-binary "1380475081000000// foo{label0=val0,label1=val1} 42" \
  ''

Python:
>>> import requests
>>> headers = {'X-Warp10-Token': 'WRITE_TOKEN', 'content-type': 'text/plain'}
>>> r = requests.post(url, headers=headers, data=payload)
>>> r.status_code

WarpScript

Warp10 provides a query language called WarpScript, designed to manipulate time series. It features:
- Server-side analysis
- Dataflow language
- Rich programming QL (+800 functions)
- Geo-fencing capabilities
- Unified language (query, batch, streaming)

Here's a WarpScript example:

'TOKEN_READ' 'token' STORE                      // Storing the token
[ $token 'temperature' {} NOW 1 h ] FETCH       // Fetching all values from now to 1 hour ago
[ SWAP bucketizer.max 0 1 m 0 ] BUCKETIZE       // Get max value for each minute
[ SWAP mapper.round 0 0 0 ] MAP                 // Round to nearest decimal
[ SWAP [] 15 filter.last.le ] FILTER            // Filter points less than 15°C

To help you get started, we created a Warp10 Tour.

How to Query data: The full documentation is available at.

curl -v --data-binary "'READ_TOKEN' 'test' {} NOW -1 FETCH" \
  ''

or

// Egress token to use
'TOKEN_READ' 'token' STORE
[ $token '~class.*' { 'foo' '=bar' } NOW 1 h ] FETCH   // fetch from date (here NOW) to 1 hour ago
// or
[ $token '~class.*' { 'foo' '=bar' } NOW -1 ] FETCH    // fetch the last point of each GTS matching the selector from date (here NOW)

curl -X POST --data-binary @script.mc2 \
  ''

Grafana: WarpScript is integrated as a data source with our Grafana.

MQTT support for Metrics is in an early stage and considered alpha. If you're interested in MQTT, contact us through OVH Community.
https://docs.ovh.com/gb/en/metrics/using/
2018-10-15T15:44:53
CC-MAIN-2018-43
1539583509326.21
[]
docs.ovh.com
Changed 10 years ago by john_lee - Blocking 1640 added comment:13 Changed 10 years ago by wendy_hung - Status changed from assigned to closed - HasPatchForReview unset - Resolution set to wontfix. This one is for GTA01, can't really reproduce this or test it.
http://docs.openmoko.org/trac/ticket/883
2018-10-15T14:44:04
CC-MAIN-2018-43
1539583509326.21
[]
docs.openmoko.org
Delete a metric type

Deleting a metric type entails deleting many related records. Before you begin: Role required: assessment_admin or admin. About this task: You must delete some of these records manually before deleting the type, while the system deletes others automatically with the type.

Procedure:
1. Delete the records associated with the type to delete: assessment results (metric and category results), assessment instances (questions and assessment instances, in that order), and assessment groups.
2. Delete the type. A confirmation dialog box appears and alerts you that certain records associated with the type will also be deleted.
3. Click OK to delete the type and these related records: scheduled job for assessment generation, business rule for assessable record generation, assessable records, metric categories, category users, stakeholders, metrics, metric definitions, decision matrixes.

Related Concepts: Metric types and assessable records
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/assessments/task/t_DeleteAMetricType.html
2018-10-15T15:31:18
CC-MAIN-2018-43
1539583509326.21
[]
docs.servicenow.com
exercise bike craigslist large size of stationary stand bikes recumbent bicycle archived on category with chicago.
http://top-docs.co/exercise-bike-craigslist/exercise-bike-craigslist-large-size-of-stationary-stand-bikes-recumbent-bicycle-archived-on-category-with-chicago/
2019-02-15T23:47:07
CC-MAIN-2019-09
1550247479627.17
['http://top-docs.co/wp-content/uploads/2018/06/exercise-bike-craigslist-large-size-of-stationary-stand-bikes-recumbent-bicycle-archived-on-category-with-chicago.jpg']
top-docs.co
CreateEndpointConfig. Note.

Request Syntax

{
  "EndpointConfigName": "string",
  "KmsKeyId": "string",
  "ProductionVariants": [
    {
      "AcceleratorType": "string",
      "InitialInstanceCount": number,
      "InitialVariantWeight": number,
      "InstanceType": "string",
      "ModelName": "string",
      "VariantName": "string"
    }
  ],
  "Tags": [
    {
      "Key": "string",
      "Value": "string"
    }
  ]
}

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format.

- EndpointConfigName: The name of the endpoint configuration. You specify this name in a CreateEndpoint request. Type: String. Length Constraints: Maximum length of 63. Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9])*. Required: Yes
- KmsKeyId: The Amazon Resource Name (ARN) of an AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint. Type: String. Length Constraints: Maximum length of 2048. Required: No
- ProductionVariants: An array of ProductionVariant objects, one for each model that you want to host at this endpoint. Type: Array of ProductionVariant objects. Array Members: Minimum number of 1 item. Required: Yes
- Tags: An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. Type: Array of Tag objects. Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No

Response Syntax

{
  "EndpointConfigArn": "string"
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service.

- EndpointConfigArn: The Amazon Resource Name (ARN) of the endpoint configuration. Type: String. Length Constraints: Minimum length of 20. Maximum length of:
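As a sketch of how this action is typically invoked from Python with boto3 (the endpoint configuration name, model name, variant name and instance type below are placeholders, not values taken from this reference):

import boto3

sagemaker = boto3.client("sagemaker")

# Placeholder names: the referenced model must already exist (CreateModel).
response = sagemaker.create_endpoint_config(
    EndpointConfigName="my-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
            "InitialVariantWeight": 1.0,
        }
    ],
    Tags=[{"Key": "project", "Value": "demo"}],
)
print(response["EndpointConfigArn"])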
https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateEndpointConfig.html
2019-02-15T23:38:19
CC-MAIN-2019-09
1550247479627.17
[]
docs.aws.amazon.com
System requirements

Before you install Citrix Application Delivery Management (ADM), you must understand the software requirements, browser requirements, port information, license information, and limitations. This section covers:
- Requirements for Citrix ADM release 12.1
- Requirements for Citrix ADM on-prem agent
- Minimum Citrix ADC versions required for Citrix ADM features
- Citrix SD-WAN versions supported by Citrix ADM
- Limitations in Citrix ADM release 12.1
- Requirements for Citrix SD-WAN instance management
- Requirements for Citrix ADM analytics
- Supported hypervisors
- Supported operating systems and receiver versions
- Supported browsers
- Supported ports
https://docs.citrix.com/en-us/citrix-application-delivery-management-software/12-1/system-requirements.html
2019-02-16T00:20:55
CC-MAIN-2019-09
1550247479627.17
[]
docs.citrix.com
Having trouble setting up test coverage for your repo? In addition to the Configuring Test Coverage help doc, these topics provide additional troubleshooting context:

Basic requirements
Client-side considerations
Self help from technical documentation
Contact Support
Command Line Interface (CLI) Common Questions
Specific error messages from the reporter
Common GitHub scenarios
Diff-coverage and total-coverage statuses are hanging
Generating test coverage for branches, but not seeing coverage results in GitHub

- Do you have coverage reporting configured for one of our supported languages?
- Are you using the correct CC_TEST_REPORTER_ID?
- Are you using single or multiple test coverage setups? Setup varies for each.
- Are you running your tests outside of your root directory? You may need to use the --prefix option.
- Are you working with a Docker container? Check out this doc.
- Coverage data is based on the data that you send us via your coverage payloads, which is created client-side.
- Try removing Code Climate from the equation: (1) Look at coverage results locally. Are they what you expect? (2) Are your results what you expect, regardless of Code Climate? (3) How are they different while looking at Code Climate?
- Output debug messages to your CI using the --debug flag with your cc-test-reporter.
- View uploaded reports (with potential errors) in the Code Climate UI. These can be found under the Recent Reports section at the following URL: codeclimate.com/repos/repo#/settings/test_reporter
- Search the test reporter GitHub project for specific error messages.
- Read the test reporter's README for detailed instructions and low level usage commands.
- Check out the sample/working test config files on the test reporter GitHub project here.
- Ask a question and open an issue on the test reporter repo itself. open Issue | contact Support

I see a "successfully sent" message in my CI, but no results show in Code Climate's UI.
You will only see default branch coverage in Code Climate's UI. Coverage info for non-default branches is visible in GitHub using the browser extension. Possibly due to malformed payloads. Head to codeclimate.com/repos/repo#/settings/test_reporter to view a list of uploaded reports and potential errors. Add the --debug flag to view additional output. Make sure that you're sending ENV (not Git) values. See this doc. Check to see if you're pinning to a specific test coverage reporter version.

Where should I put the --debug flag?
If you're running your tests in single builds, use after-build --debug. For tests run in parallel, please use both format-coverage --debug and upload-coverage --debug.

What CI-specific environmental variables and calls should I use?

Coverage results from one repo are showing on a different repo.
You might be using the incorrect CC_TEST_REPORTER_ID. Check the test reporter id used in your test coverage config file or CI configurations.

When Code Climate is down, it causes my builds to fail because I can't upload my coverage reports. How can I fix this?
We're looking to update our reporter to account for this. In the meantime, we recommend the following workaround:
after-build || echo "Skipping CC coverage upload"
or
upload-coverage || echo "Skipping CC coverage upload"

Error: you must supply a CC_TEST_REPORTER_ID ENV variable or pass it via the --id/-r flag
The reporter is unable to find your repo's test reporter ID. This value either needs to be configured within the environment or passed directly as a CLI argument.
export CC_TEST_REPORTER_ID=<your token>
cc-test-reporter after-build --exit-code $?

OR

export CC_TEST_REPORTER_ID=<your token>
cc-test-reporter after-build --id <your token> --exit-code $?

Error: file not found
The reporter is unable to find a file referenced within the test report. Does that file exist within your git repository? Was your test suite run within a different filesystem (such as in a docker container)? You may need to specify a prefix value.
- For example, if you're running the tests within a docker container and your app code is located at /usr/src/app, the reporter run outside of the docker container will not be able to find files at the same absolute path.
- Pass --prefix /usr/src/app to instruct the test reporter to strip the unknown base path.

Invalid path part
Most often, this is related to:
- The reporter not being able to find your test coverage results to upload them to Code Climate.
- The reporter encountering a file that it can't process.
For #1, add the --prefix option to make the path mentioned relative to the project root. For #2, try excluding the mentioned file from your test coverage payloads. If you need the coverage results of that file, contact us and we'll help.

Error: could not find any viable formatter. available formatters: simplecov, lcov, coverage.py, clover, gocov, gcov, cobertura, jacoco
The reporter is unable to find a formatter. This is often seen with Java projects, when the path to source code can't be inferred. Instead of after-build, please use:
1) format-coverage, which includes:
JACOCO_SOURCE_PATH: the path to Java source files (a new environment variable)
coverage file: the path to the JaCoCo coverage XML file (the first argument)
AND
2) upload-coverage

Invalid certificate
When using the --insecure flag, any batch request will be made using HTTP, but the main endpoint will still use the URL specified in CC_TEST_REPORTER_COVERAGE_ENDPOINT. Change the endpoint from https:
export CC_TEST_REPORTER_COVERAGE_ENDPOINT=
To http:
export CC_TEST_REPORTER_COVERAGE_ENDPOINT=
https://docs.codeclimate.com/docs/test-coverage-troubleshooting-tips
2019-02-15T22:52:55
CC-MAIN-2019-09
1550247479627.17
[]
docs.codeclimate.com
The LANSA Components that can be upgraded depend on what has already been installed. These can be: You may select only one LANSA for i system to upgrade. To upgrade another LANSA for i system, you must execute the upgrade again. If you attempt to upgrade an old system with a later upgrade not supported by this install, then the upgrade will not allow you to continue. You will need to refer to the correct upgrade path. When upgrading a LANSA for i system, you may also upgrade any combination of LANSA for i, LANSA Integrator - provided they are listed. If you are using a Multi-tier Web installation, you must only upgrade the LANSA Web Server by itself. If LANSA Integrator was originally installed at the same time as LANSA was installed, LANSA Integrator will automatically be selected for upgrade. Note: When LANSA Integrator or a Web Server are selected alone, Partition Initialization will be bypassed as it is not required. After completing your selections, press Enter to continue. Error messages will be displayed if: The screen Partition Initialization will list the default partition. You can create more partitions using the F6 command key and entering the following details: Partition Identifier Specify the identifier/mnemonic to be assigned to the new partition. Must be three characters long and consist of characters in the range A to Z, 0 to 9, @ (at sign), # (hash sign), and/or $ (dollar). No two partitions can have the same identifier. Partition Description The description cannot be blank. Module Library The name of the library in which the compiled RDML/RDMLX programs associated with this new partition are to be kept. This library must not be the same as the Module Library used by other partitions. It must not exist as it will be created by the initialization process. Once specified, this name can only be changed using LANSA's Housekeeping facility. Default File Library The name of the library for the files in this partition. The library must not exist as it will be created during the initialization process. Once specified, this name can only be changed using LANSA's Housekeeping facility. If left blank, the default will be the Module Library. From Partition Copy details for the new partition from this partition. If the partition is multilingual, the multilingual details will be copied to the new partition. This partition must already exist. Enable For RDMLX Allowable values are: YES – partition is RDMLX-enabled. NO - partition is not RDMLX-enabled. When you press Enter, you will be presented with the Create Partition screen again to enter another partition. Press F12 or F3 to continue. To review the options that are available by default for each partition, enter Y beside the partition. In the options list, select or deselect options as required.
https://docs.lansa.com/14/en/lansa040/content/lansa/ladins01_0105.htm
2019-02-15T22:46:46
CC-MAIN-2019-09
1550247479627.17
[]
docs.lansa.com
GetClassLongA function Retrieves the specified 32-bit (DWORD) value from the WNDCLASSEX structure associated with the specified window. Syntax DWORD GetClassLongA( HWND hWnd, int nIndex ); Parameters hWnd Type: HWND A handle to the window and, indirectly, the class to which the window belongs. nIndex Type: int The value to be retrieved. To retrieve a value from the extra class memory, specify the positive, zero-based byte offset of the value to be retrieved. Valid values are in the range zero through the number of bytes of extra class memory, minus four; for example, if you specified 12 or more bytes of extra class memory, a value of 8 would be an index to the third integer. To retrieve any other value from the WNDCLASSEX structure, specify one of the following values. Return Value Type: DWORD If the function succeeds, the return value is the requested value. If the function fails, the return value is zero. To get extended error information, call GetLastError. Remarks Reserve extra class memory by specifying a nonzero value in the cbClsExtra member of the WNDCLASSEX structure used with the RegisterClassEx function.
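As a brief illustration (not part of the original reference page), the following C sketch reads the class style of an existing window and distinguishes a genuine zero return value from a failure. GCL_STYLE and CS_DBLCLKS are standard winuser.h constants; hwnd is assumed to be a valid window handle obtained elsewhere.
#include <windows.h>
#include <stdio.h>

void PrintClassStyle(HWND hwnd)
{
    SetLastError(0);  /* a return value of 0 can be valid, so clear the last error first */
    DWORD style = GetClassLongA(hwnd, GCL_STYLE);
    if (style == 0 && GetLastError() != 0) {
        printf("GetClassLongA failed: %lu\n", GetLastError());
        return;
    }
    if (style & CS_DBLCLKS) {
        printf("The window class sends double-click messages.\n");
    }
}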
https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-getclasslonga
2019-02-15T22:52:51
CC-MAIN-2019-09
1550247479627.17
[]
docs.microsoft.com
TemplateMetadata Contains information about an email template. Contents - CreatedTimestamp The time and date the template was created. Type: Timestamp Required: No - Name The name of the template. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
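For example, TemplateMetadata entries are what the ListTemplates operation returns. A hedged Python (boto3) sketch of reading the Name and CreatedTimestamp fields might look like the code below; credentials and region configuration are assumed to be set up elsewhere, and this is an illustration rather than the official SDK sample:
import boto3

ses = boto3.client("ses")
# ListTemplates returns a list of TemplateMetadata entries.
response = ses.list_templates(MaxItems=10)
for template in response.get("TemplatesMetadata", []):
    print(template["Name"], template["CreatedTimestamp"])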
https://docs.aws.amazon.com/ses/latest/APIReference/API_TemplateMetadata.html
2019-02-15T23:46:54
CC-MAIN-2019-09
1550247479627.17
[]
docs.aws.amazon.com
Install the JRE 1.8 Mac OS X users can install the Oracle JRE 8 (64-bit version) from Homebrew through Cask. It is enough to run brew cask install Caskroom/cask/java. Now you can check your JRE installation. Open a terminal and execute the command java -version. If you see java version "1.8.0_74" Java(TM) SE Runtime Environment (build 1.8.0_74-b02) Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode) then all is OK, and you can move on to the next step! If you get an error, check your installation and try to find a solution or a better tutorial online. Note: it is necessary to install the 64-bit version of Oracle JRE 8. Download Waves package and configure the application Download the latest version of waves.jar and the required .conf configuration file for OS X.
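Once Java is installed and both files are downloaded, starting the node from a terminal typically looks like the sketch below; the directory and file names are placeholders, so use the actual jar and .conf file you downloaded:
cd ~/waves                       # directory containing the downloaded files
java -jar waves.jar waves.conf   # start the node with your configuration file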
https://docs.wavesplatform.com/ko/waves-full-node/how-to-install-a-node/on-mac.html
2019-02-16T00:13:45
CC-MAIN-2019-09
1550247479627.17
[]
docs.wavesplatform.com
While analytics gives us some answers, it also creates many more questions: - Of the people who came to the site, how many visited from the US? - Of the US visitors, how many came from search engines? - And how many items were purchased by just those visitors? Segments are a way of saving customer behavioral groups that are important to the user or company. Once a segment is created, it can be used for quick segmentation in other Woopra features such as Analytics Reports and the People View. Segmentation allows the user to narrowly focus attention on only the visitors they want to analyze, and probe deeper to make better decisions, leveraging multiple types of data within Woopra. Harnessing Actions/Action Properties, Visit Properties, Visitor Properties, or a combination of those within the segmentation filters will help the user to better understand their customers and take action on the valuable data. Segments can be found under the Configure section of your Woopra Dashboard.
https://docs.woopra.com/docs/segments
2019-02-15T22:57:56
CC-MAIN-2019-09
1550247479627.17
[]
docs.woopra.com
Show Your Salesforce IoT Data Anywhere in Salesforce with IoT Insights Where: This change applies to Lightning Experience in Enterprise, Performance, Unlimited, and Developer editions. Who: You must have a Salesforce IoT license. Any user who has access to the parent record can view the data in the IoT Insights component. Why: Give your users access to Salesforce IoT data from any Lightning record page in Salesforce. You can also add Salesforce IoT data to your community using the IoT Insights component in the Community Builder. Customers can view information about their own connected devices, and partners can access Salesforce IoT device data to better service your customers. This component is available only for record pages and is available only in English. How: From Setup, enter IoT in the Quick Find box, then select Get Started. In Enable Salesforce IoT, click Enable. In Enable IoT Insights, click Enable. To show Salesforce IoT data in the IoT Insights Lightning component, create and activate an orchestration with your desired variables. Add the IoT Insights component through the Lightning App Builder or the Lightning Community Builder.
http://docs.releasenotes.salesforce.com/en-us/winter19/release-notes/rn_iot_insights.htm
2019-02-15T23:26:17
CC-MAIN-2019-09
1550247479627.17
[array(['release_notes/images/rn_iotx_iot_insights.png', 'IoT Insights component in asset record'], dtype=object)]
docs.releasenotes.salesforce.com
This documents the changes made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation. This is the initial version of the v2.1 API which supports microversions. The V2.1 API is from the REST API users’s point of view exactly the same as v2.0 except with strong input validation. A user can specify a header in the API request: X-OpenStack-Nova-API-Version: <version> where <version> is any valid api version for this API. If no version is specified then the API will behave as if a version request of v2.1 was requested. Added Keypair type. A user can request the creation of a certain ‘type’ of keypair ( ssh or x509) in the os-keypairs plugin If no keypair type is specified, then the default ssh type of keypair is created. Fixes status code for os-keypairs create method from 200 to 201 Fixes status code for os-keypairs delete method from 202 to 204 Exposed additional attributes in os-extended-server-attributes: reservation_id, launch_index, ramdisk_id, kernel_id, hostname, root_device_name, userdata. Exposed delete_on_termination for volumes_attached in os-extended-volumes. This change is required for the extraction of EC2 API into a standalone service. It exposes necessary properties absent in public nova APIs yet. Add info for Standalone EC2 API to cut access to Nova DB. Show the reserved status on a FixedIP object in the os-fixed-ips API extension. The extension allows one to reserve and unreserve a fixed IP but the show method does not report the current status. Before version 2.5, the command nova list --ip6 xxx returns all servers for non-admins, as the filter option is silently discarded. There is no reason to treat ip6 different from ip, though, so we just add this option to the allowed list. A new API for getting remote console is added: POST /servers/<uuid>/remote-consoles { "remote_console": { "protocol": ["vnc"|"rdp"|"serial"|"spice"], "type": ["novnc"|"xpvnc"|"rdp-html5"|"spice-html5"|"serial"] } } Example response: { "remote_console": { "protocol": "vnc", "type": "novnc", "url": "" } } The old APIs ‘os-getVNCConsole’, ‘os-getSPICEConsole’, ‘os-getSerialConsole’ and ‘os-getRDPConsole’ are removed. Check the is_public attribute of a flavor before adding tenant access to it. Reject the request with HTTPConflict error. Add a new locked attribute to the detailed view, update, and rebuild action. locked will be true if anyone is currently holding a lock on the server, false otherwise. Added user_id parameter to os-keypairs plugin, as well as a new property in the request body, for the create operation. Administrators will be able to list, get details and delete keypairs owned by users other than themselves and to create new keypairs on behalf of their users. Exposed attribute forced_down for os-services. Added ability to change the forced_down attribute by calling an update. Exposes VIF net_id attribute in os-virtual-interfaces. User will be able to get Virtual Interfaces net_id in Virtual Interfaces list and can determine in which network a Virtual Interface is plugged into. Remove onSharedStorage parameter from server’s evacuate action. Nova will automatically detect if the instance is on shared storage. Also adminPass is removed from the response body. The user can get the password with the server’s os-server-password action. From this version of the API users can choose ‘soft-affinity’ and ‘soft-anti-affinity’ rules too for server-groups. 
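As an illustration of how the microversion header is used in practice, the 2.6 remote console call above could be made with curl roughly as follows; the token, compute endpoint, and server UUID are placeholders:
curl -s -X POST "$COMPUTE_ENDPOINT/servers/$SERVER_UUID/remote-consoles" \
  -H "X-Auth-Token: $OS_TOKEN" \
  -H "X-OpenStack-Nova-API-Version: 2.6" \
  -H "Content-Type: application/json" \
  -d '{"remote_console": {"protocol": "vnc", "type": "novnc"}}'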
Exposes new host_status attribute for servers/detail and servers/{server_id}. Ability to get nova-compute status when querying servers. By default, this is only exposed to cloud administrators. Add a new API for triggering crash dump in an instance. Different operation systems in instance may need different configurations to trigger crash dump. Allow the user to set and get the server description. The user will be able to set the description when creating, rebuilding, or updating a server, and get the description as part of the server details. From this version of the API user can call detach and attach volumes for instances which are in shelved and shelved_offloaded state. A new resource servers:migrations added. A new API to force live migration to complete added: POST /servers/<uuid>/migrations/<id>/action { "force_complete": null } From this version of the API users can get the migration summary list by index API or the information of a specific migration by get API. And the old top-level resource /os-migrations won’t be extended anymore. Add migration_type for old /os-migrations API, also add ref link to the /servers/{uuid}/migrations/{id} for it when the migration is an in-progress live-migration. Modify input parameter for os-migrateLive. The block_migration will support ‘auto’ value, and disk_over_commit flag will be removed. Added support of server tags. A user can create, update, delete or check existence of simple string tags for servers by the os-server-tags plugin. Tags have the following schema restrictions: The resource point for these operations is /servers/<server_id>/tags A user can add a single tag to the server by sending PUT request to the /servers/<server_id>/tags/<tag> where <tag> is any valid tag name. A user can replace all current server tags to the new set of tags by sending PUT request to the /servers/<server_id>/tags. New set of tags must be specified in request body. This set must be in list ‘tags’. A user can remove specified tag from the server by sending DELETE request to the /servers/<server_id>/tags/<tag> where <tag> is tag name which user wants to remove. A user can remove all tags from the server by sending DELETE request to the /servers/<server_id>/tags A user can get a set of server tags with information about server by sending GET request to the /servers/<server_id> Request returns dictionary with information about specified server, including list ‘tags’ { 'id': {server_id}, ... 'tags': ['foo', 'bar', 'baz'] } A user can get only a set of server tags by sending GET request to the /servers/<server_id>/tags Response { 'tags': ['foo', 'bar', 'baz'] } A user can check if a tag exists or not on a server by sending GET /servers/{server_id}/tags/{tag} Request returns 204 No Content if tag exist on a server or 404 Not Found if tag doesn’t exist on a server. A user can filter servers in GET /servers request by new filters: These filters can be combined. Also user can use more than one string tags for each filter. In this case string tags for each filter must be separated by comma: GET /servers?tags=red&tags-any=green,orange Added support for the new form of microversion headers described in the Microversion Specification. Both the original form of header and the new form is supported. Nova API hypervisor.cpu_info change from string to JSON object. From this version of the API the hypervisor’s ‘cpu_info’ field will be returned as JSON object (not string) by sending GET request to the /v2.1/os-hypervisors/{hypervisor_id}. 
Updates the POST request body for the evacuate action to include the optional force boolean field defaulted to False. Also changes the evacuate action behaviour when providing a host string field by calling the nova scheduler to verify the provided host unless the force attribute is set. Updates the POST request body for the live-migrate action to include the optional force boolean field defaulted to False. Also changes the live-migrate action behaviour when providing a host string field by calling the nova scheduler to verify the provided host unless the force attribute is set. Adds an optional, arbitrary ‘tag’ item to the ‘networks’ item in the server boot request body. In addition, every item in the block_device_mapping_v2 array can also have an optional, arbitrary ‘tag’ item. These tags are used to identify virtual device metadata, as exposed in the metadata API and on the config drive. For example, a network interface on the virtual PCI bus tagged with ‘nic1’ will appear in the metadata along with its bus (PCI), bus address (ex: 0000:00:02.0), MAC address, and tag (‘nic1’). Note A bug has caused the tag attribute to no longer be accepted for networks starting with version 2.37 and for block_device_mapping_v2 starting with version 2.33. In other words, networks could only be tagged between versions 2.32 and 2.36 inclusively and block devices only in version 2.32. As of version 2.42 the tag attribute has been restored and both networks and block devices can be tagged again. Support pagination for hypervisor by accepting limit and marker from the GET API request: GET /v2.1/{tenant_id}/os-hypervisors?marker={hypervisor_id}&limit={limit} In the context of device tagging at server create time, 2.33 also removes the tag attribute from block_device_mapping_v2. This is a bug that is fixed in 2.42, in which the tag attribute is reintroduced. Checks in os-migrateLive before live-migration actually starts are now made in background. os-migrateLive is not throwing 400 Bad Request if pre-live-migration checks fail. Added pagination support for keypairs. Optional parameters ‘limit’ and ‘marker’ were added to GET /os-keypairs request, the default sort_key was changed to ‘name’ field as ASC order, the generic request format is: GET /os-keypairs?limit={limit}&marker={kp_name} All the APIs which proxy to another service were deprecated in this version, also the fping API. Those APIs will return 404 with Microversion 2.36. The network related quotas and limits are removed from API also. The deprecated API endpoints as below: '/images' '/os-networks' '/os-tenant-networks' '/os-fixed-ips' '/os-floating-ips' '/os-floating-ips-bulk' '/os-floating-ip-pools' '/os-floating-ip-dns' '/os-security-groups' '/os-security-group-rules' '/os-security-group-default-rules' '/os-volumes' '/os-snapshots' '/os-baremetal-nodes' '/os-fping' Note A regression was introduced in this microversion which broke the force parameter in the PUT /os-quota-sets API. The fix will have to be applied to restore this functionality. Added support for automatic allocation of networking, also known as “Get Me a Network”. With this microversion, when requesting the creation of a new server (or servers) the networks entry in the server portion of the request body is required. The networks object in the request can either be a list or an enum with values: Also, the uuid field in the networks object in the server create request is now strictly enforced to be in UUID format. 
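To make the 2.37 change concrete, a minimal server create request body using the list form of networks (with an explicit network uuid) might look like the sketch below; the name and uuid values are placeholders, and the enum form for automatic allocation is the one described above:
POST /servers
{
    "server": {
        "name": "example-server",
        "imageRef": "<image uuid>",
        "flavorRef": "<flavor id>",
        "networks": [{"uuid": "11111111-2222-3333-4444-555555555555"}]
    }
}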
In the context of device tagging at server create time, 2.37 also removes the tag attribute from networks. This is a bug that is fixed in 2.42, in which the tag attribute is reintroduced. Before version 2.38, the command nova list --status invalid_status was returning empty list for non admin user and 500 InternalServerError for admin user. As there are sufficient statuses defined already, any invalid status should not be accepted. From this version of the API admin as well as non admin user will get 400 HTTPBadRequest if invalid status is passed to nova list command. Deprecates image-metadata proxy API that is just a proxy for Glance API to operate the image metadata. Also removes the extra quota enforcement with Nova metadata quota (quota checks for ‘createImage’ and ‘createBackup’ actions in Nova were removed). After this version Glance configuration option image_property_quota should be used to control the quota of image metadatas. Also, removes the maxImageMeta field from os-limits API response. Optional query parameters limit and marker were added to the os-simple-tenant-usage endpoints for pagination. If a limit isn’t provided, the configurable max_limit will be used which currently defaults to 1000. GET /os-simple-tenant-usage?limit={limit}&marker={instance_uuid} GET /os-simple-tenant-usage/{tenant_id}?limit={limit}&marker={instance_uuid} A tenant’s usage statistics may span multiple pages when the number of instances exceeds limit, and API consumers will need to stitch together the aggregate results if they still want totals for all instances in a specific time window, grouped by tenant. Older versions of the os-simple-tenant-usage endpoints will not accept these new paging query parameters, but they will start to silently limit by max_limit to encourage the adoption of this new microversion, and circumvent the existing possibility of DoS-like usage requests when there are thousands of instances. The ‘uuid’ attribute of an aggregate is now returned from calls to the /os-aggregates endpoint. This attribute is auto-generated upon creation of an aggregate. The os-aggregates API resource endpoint remains an administrator-only API. In the context of device tagging at server create time, a bug has caused the tag attribute to no longer be accepted for networks starting with version 2.37 and for block_device_mapping_v2 starting with version 2.33. Microversion 2.42 restores the tag parameter to both networks and block_device_mapping_v2, allowing networks and block devices to be tagged again. The os-hosts API is deprecated as of the 2.43 microversion. Requests made with microversion >= 2.43 will result in a 404 error. To list and show host details, use the os-hypervisors API. To enable or disable a service, use the os-services API. There is no replacement for the shutdown, startup, reboot, or maintenance_mode actions as those are system-level operations which should be outside of the control of the compute service. The following APIs which are considered as proxies of Neutron networking API, are deprecated and will result in a 404 error response in new Microversion:. The createImage and createBackup server action APIs no longer return a Location header in the response for the snapshot image, they now return a json dict in the response body with an image_id key and uuid value. The request_id created for every inbound request is now returned in X-OpenStack-Request-ID in addition to X-Compute-Request-ID to be consistent with the rest of OpenStack. 
This is a signaling only microversion, as these header settings happen well before microversion processing. Replace the flavor name/ref with the actual flavor details from the embedded flavor object when displaying server details. Requests made with microversion >= 2.47 will no longer return the flavor ID/link but instead will return a subset of the flavor details. If the user is prevented by policy from indexing extra-specs, then the extra_specs field will not be included in the flavor information. Before version 2.48, VM diagnostics response was just a ‘blob’ of data returned by each hypervisor. From this version VM diagnostics response is standardized. It has a set of fields which each hypervisor will try to fill. If a hypervisor driver is unable to provide a specific field then this field will be reported as ‘None’. Continuing from device role tagging at server create time introduced in version 2.32 and later fixed in 2.42, microversion 2.49 allows the attachment of network interfaces and volumes with an optional tag parameter. This tag is used to identify the virtual devices in the guest and is exposed in the metadata API. Because the config drive cannot be updated while the guest is running, it will only contain metadata of devices that were tagged at boot time. Any changes made to devices while the instance is running - be it detaching a tagged device or performing a tagged device attachment - will not be reflected in the config drive. Tagged volume attachment is not supported for shelved-offloaded instances. The server_groups and server_group_members keys are exposed in GET & PUT os-quota-class-sets APIs Response body. Networks related quotas have been filtered out from os-quota-class. Below quotas are filtered out and not available in os-quota-class-sets APIs from this microversion onwards. There are two changes for the 2.51 microversion: volume-extendedevent name to the os-server-external-eventsAPI. This will be used by the Block Storage service when extending the size of an attached volume. This signals the Compute service to perform any necessary actions on the compute host or hypervisor to adjust for the new volume block device size.. Adds support for applying tags when creating a server. The tag schema is the same as in the 2.26 microversion. os-services Services are now identified by uuid instead of database id to ensure uniqueness across cells. This microversion brings the following changes: GET /os-services returns a uuid in the id field of the response DELETE /os-services/{service_uuid} requires a service uuid in the path The following APIs have been superseded by PUT /os-services/{service_uuid}/: PUT /os-services/disable PUT /os-services/disable-log-reason PUT /os-services/enable PUT /os-services/force-down PUT /os-services/{service_uuid} takes the following fields in the body: status- can be either “enabled” or “disabled” to enable or disable the given service disabled_reason- specify with status=”disabled” to log a reason for why the service is disabled forced_down- boolean indicating if the service was forced down by an external service PUT /os-services/{service_uuid} will now return a full service resource representation like in a GET response os-hypervisors Hypervisors are now identified by uuid instead of database id to ensure uniqueness across cells. 
This microversion brings the following changes: GET /os-hypervisors/{hypervisor_hostname_pattern}/search is deprecated and replaced with the hypervisor_hostname_pattern query parameter on the GET /os-hypervisors and GET /os-hypervisors/detail APIs. Paging with hypervisor_hostname_pattern is not supported. GET /os-hypervisors/{hypervisor_hostname_pattern}/servers is deprecated and replaced with the with_servers query parameter on the GET /os-hypervisors and GET /os-hypervisors/detail APIs. GET /os-hypervisors/{hypervisor_id} supports the with_servers query parameter to include hosted server details in the response. The GET /os-hypervisors/{hypervisor_id} and GET /os-hypervisors/{hypervisor_id}/uptime APIs now take a uuid value for the {hypervisor_id} path parameter. The GET /os-hypervisors and GET /os-hypervisors/detail APIs will now use a uuid marker for paging across cells. GET /os-hypervisors GET /os-hypervisors/detail GET /os-hypervisors/{hypervisor_id} GET /os-hypervisors/{hypervisor_id}/uptime Adds a description field to the flavor resource in the following APIs: GET /flavors GET /flavors/detail GET /flavors/{flavor_id} POST /flavors PUT /flavors/{flavor_id} The embedded flavor description will not be included in server representations. Updates the POST request body for the migrate action to include the optional host string field defaulted to null. If host is set, the migrate action verifies the provided host with the nova scheduler and uses it as the destination for the migration. The 2.57 microversion makes the following changes: The personality parameter is removed from the server create and rebuild APIs. The user_data parameter is added to the server rebuild API. The maxPersonality and maxPersonalitySize limits are excluded from the GET /limits API response. The injected_files, injected_file_content_bytes and injected_file_path_bytes quotas are removed from the os-quota-sets and os-quota-class-sets APIs. Add pagination support and a changes-since filter for the os-instance-actions API. Users can now use limit and marker to perform paginated queries when listing instance actions. Users can also use the changes-since filter to filter the results based on the last time the instance action was updated. Added pagination support for migrations; there are four changes: a changes-since filter for the os-migrations API. Users can now use limit and marker to perform paginated queries when listing migrations. The GET /os-migrations API no longer allows additional properties. From this version of the API users can attach a multiattach capable volume to multiple instances. The API request for creating the additional attachments is the same. The chosen virt driver and the volume back end have to support the functionality as well. Exposes flavor extra_specs in the flavor representation. Now users can see the flavor extra-specs in flavor API responses and do not need to call the GET /flavors/{flavor_id}/os-extra_specs API. If the user is prevented by policy from indexing extra-specs, then the extra_specs field will not be included in the flavor information. Flavor extra_specs will be included in the response body of the following APIs: GET /flavors/detail GET /flavors/{flavor_id} POST /flavors PUT /flavors/{flavor_id} Adds host (hostname) and hostId (an obfuscated hashed host id string) fields to the instance action GET /servers/{server_id}/os-instance-actions/{req_id} API. The display of the newly added host field will be controlled via the policy rule os_compute_api:os-instance-actions:events, which is the same policy used for the events.traceback field.
If the user is prevented by policy, only hostId will be displayed. Adds support for the trusted_image_certificates parameter, which is used to define a list of trusted certificate IDs that can be used during image signature verification and certificate validation. The list is restricted to a maximum of 50 IDs. Note that trusted_image_certificates is not supported with volume-backed servers. The trusted_image_certificates request parameter can be passed to the server create and rebuild APIs: POST /servers POST /servers/{server_id}/action (rebuild) The trusted_image_certificates parameter will be in the response body of the following APIs: GET /servers/detail GET /servers/{server_id} PUT /servers/{server_id} POST /servers/{server_id}/action (rebuild) Enables users to define policy rules on the server group policy to meet more advanced policy requirements. This microversion brings the following changes in the server group APIs: The policy and rules fields are added to the request of POST /os-server-groups. The policy represents the name of the policy. The rules field, which is a dict, can be applied to the policy, and currently only supports max_server_per_host for the anti-affinity policy. The policy and rules fields will be returned in the response body of the POST, GET /os-server-groups API and the GET /os-server-groups/{server_group_id} API. The policies and metadata fields have been removed from the response body of the POST, GET /os-server-groups API and the GET /os-server-groups/{server_group_id} API. Adds support for aborting live migrations in queued and preparing status via the API DELETE /servers/{server_id}/migrations/{migration_id}. The changes-before filter can be included as a request parameter of the following APIs to filter by changes before or equal to the resource updated_at time: GET /servers GET /servers/detail GET /servers/{server_id}/os-instance-actions GET /os-migrations Adds the volume_type parameter to block_device_mapping_v2, which can be used to specify the cinder volume_type when creating a server. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/nova/latest/reference/api-microversion-history.html
2019-02-15T23:02:45
CC-MAIN-2019-09
1550247479627.17
[]
docs.openstack.org
All Files About the Cutter Tool T-HFND-004-007 The Cutter tool lets you cut a drawing to move, copy, or delete it. You can use it to scale or reposition the portion of a flattened drawing. You can also use it to trim strokes using a single gesture. The Cutter tool can be used on interesting brush strokes to cut the excess artwork and create a corner. NOTETo learn more about the Cutter tool options, see Cutter Tool Properties. Authors Marie-Eve Chartrand Christopher Diazchrisdiazart.com
https://docs.toonboom.com/help/harmony-15/essentials/drawing/about-cutter-tool.html
2019-02-15T23:48:58
CC-MAIN-2019-09
1550247479627.17
[]
docs.toonboom.com
Overview of Warehouses¶ Warehouses are required for queries, as well as all DML operations, including loading data into tables. A warehouse is defined by its size, as well as the other properties that can be set to help control and automate warehouse activity. Warehouses can be started and stopped at any time. They can also be resized at any time, even while running, to accommodate the need for more or less compute resources, based on the type of operations being performed by the warehouse. In this Topic: - Warehouse Size - Auto-suspension and Auto-resumption - Query Processing and Concurrency - Warehouse Usage in Sessions Warehouse Size¶ Size specifies the number of servers that comprise each cluster in a warehouse. Snowflake supports the following warehouse sizes: Impact on Credit Usage and Billing¶ As shown in the above table, there is a one-to-one correspondence between the number of servers in a warehouse cluster and the number of credits the cluster consumes (and is, therefore, billed) for each full hour that the warehouse runs; however, note that Snowflake utilizes per-second billing (with a 60-second minimum each time the warehouse starts) so warehouses are billed only for the credits they actually consume. The total number of credits billed depends on how long the warehouse runs continuously. For comparison purposes, the following table shows the billing totals for three different size warehouses based on their running time (totals rounded to the nearest 1000th of a credit): Note For a multi-cluster warehouse, the number of credits billed is calculated based on the number of servers per cluster and the number of clusters that run within the time period. For example, if a 3X-Large multi-cluster warehouse runs 1 cluster for one full hour and then runs 2 clusters for the next full hour, the total number of credits billed would be 192 (i.e. 64 + 128). Multi-cluster warehouses are an Enterprise Edition feature. Impact on Data Loading¶ Increasing the size of a warehouse does not always improve data loading performance. Data loading performance is influenced more by the number of files being loaded (and the size of each file) than the size of the warehouse. Tip Unless you are bulk loading a large number of files concurrently (i.e. hundreds or thousands of files), a smaller warehouse (Small, Medium, Large) is generally sufficient. Using a larger warehouse (X-Large, 2X-Large, etc.) will consume more credits and may not result in any performance increase. For more data loading tips and guidelines, see Data Loading Considerations. Impact on Query Processing¶ The size of a warehouse can impact the amount of time required to execute queries submitted to the warehouse, particularly for larger, more complex queries. In general, query performance scales linearly with warehouse size because additional compute resources are provisioned with each size increase. If queries processed by a warehouse are running slowly, you can always resize the warehouse to provision more servers. The additional servers do not impact any queries that are already running, but they are available for use by any queries that are queued or newly submitted. Tip Larger is not necessarily faster for small, basic queries. For more warehouse tips and guidelines, see Warehouse Considerations. 
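As a quick sanity check of the billing rules above, the following Python sketch applies per-second billing with the 60-second minimum to the single-cluster 3X-Large figure (64 credits per hour) quoted in this section; it is illustrative arithmetic only:
def credits_billed(seconds_running, credits_per_hour=64):
    billable_seconds = max(seconds_running, 60)   # 60-second minimum each time the warehouse starts
    return credits_per_hour * billable_seconds / 3600.0

print(round(credits_billed(45), 3))       # ~1.067 credits (45 s is billed as 60 s)
print(round(credits_billed(30 * 60), 3))  # 32.0 credits for half an hour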
Auto-suspension and Auto-resumption¶ A warehouse can be set to automatically resume or suspend, based on activity: - If auto-suspend is enabled, the warehouse is automatically suspended if the warehouse is inactive for the specified period of time. - If auto-resume is enabled, the warehouse is automatically resumed when any statement that requires a warehouse is submitted and the warehouse is the current warehouse for the session. These properties can be used to simplify and automate your monitoring and usage of warehouses to match your workload. Auto-suspend ensures that you do not leave a warehouse running (and consuming credits) when there are no incoming queries. Similarly, auto-resume ensures that the warehouse starts up again as soon as it is needed. Note Auto-suspend and auto-resume apply only to the entire warehouse and not to the individual clusters in the warehouse. For a multi-cluster warehouse: - Auto-suspend only occurs when the minimum number of clusters is running and there is no activity for the specified period of time. The minimum is typically 1 (cluster), but could be more than 1. - Auto-resume only applies when the entire warehouse is suspended (i.e. no clusters are running). Query Processing and Concurrency¶ The number of queries that a warehouse can concurrently process is determined by the size and complexity of each query. As queries are submitted, the warehouse calculates and reserves the compute resources needed to process each query. If the warehouse does not have enough remaining resources to process a query, the query is queued, pending resources that become available as other running queries complete. Snowflake provides some object-level parameters that can be set to help control query processing and concurrency: Note If queries are queuing more than desired, another warehouse can be created and queries can be manually redirected to the new warehouse. In addition, resizing a warehouse can enable limited scaling for query concurrency and queuing; however, warehouse resizing is primarily intended for improving query performance. To enable fully automated scaling for concurrency, Snowflake recommends multi-cluster warehouses, which provide essentially the same benefits as creating additional warehouses and redirecting queries, but without requiring manual intervention. Multi-cluster warehouses are an Enterprise Edition feature. Warehouse Usage in Sessions¶ When a session is initiated in Snowflake, the session does not, by default, have a warehouse associated with it. Until a session has a warehouse associated with it, queries cannot be submitted within the session. Default Warehouse for Users¶ To facilitate querying immediately after a session is initiated, Snowflake supports specifying a default warehouse for each individual user. The default warehouse for a user is used as the warehouse for all sessions initiated by the user. A default warehouse can be specified when creating or modifying the user, either through the web interface or using CREATE USER/ALTER USER. Default Warehouse for Client Utilities/Drivers/Connectors¶ In addition to default warehouses for users, any of the Snowflake clients (SnowSQL, JDBC driver, ODBC driver, Python connector, etc.) can have a default warehouse: - SnowSQL supports both a configuration file and command line option for specifying a default warehouse. - The drivers and connectors support specifying a default warehouse as a connection parameter when initiating a session. For more information, see Connecting to Snowflake. 
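The auto-suspend and auto-resume behavior described above is typically set when a warehouse is created or altered; a hedged SQL sketch is shown below (the warehouse name is arbitrary, and the full parameter list is in the CREATE WAREHOUSE reference):
CREATE WAREHOUSE my_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND = 300          -- suspend after 300 seconds of inactivity
  AUTO_RESUME = TRUE          -- resume when a statement requiring a warehouse is submitted
  INITIALLY_SUSPENDED = TRUE;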
Precedence for Warehouse Defaults¶ When a user connects to Snowflake and starts a session, Snowflake determines the default warehouse for the session in the following order: Default warehouse for the user, » overridden by… Default warehouse in the configuration file for the client utility (SnowSQL, JDBC driver, etc.) used to connect to Snowflake (if the client supports configuration files), » overridden by… Default warehouse specified on the client command line or through the driver/connector parameters passed to Snowflake. Note In addition, the default warehouse for a session can be changed at any time by executing the USE WAREHOUSE command within the session.
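For example, the user-level default and the in-session override described above can be expressed as follows (user and warehouse names are placeholders):
ALTER USER jsmith SET DEFAULT_WAREHOUSE = my_wh;   -- default for all of this user's sessions
USE WAREHOUSE reporting_wh;                        -- overrides the default for the current session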
https://docs.snowflake.net/manuals/user-guide/warehouses-overview.html
2019-02-15T23:23:26
CC-MAIN-2019-09
1550247479627.17
[]
docs.snowflake.net
Serial Controller Drivers This section describes the Serial.sys and Serenum.sys drivers, and versions 1 and 2 of the serial framework extension (SerCx and SerCx2). Drivers and applications use these inbox Windows components to control serial ports and to communicate with peripheral devices that are connected to these ports. A serial port is a hardware communication interface on a serial controller, which is a 16550 universal asynchronous receiver/transmitter device (UART) or compatible device. In this section Send comments about this topic to Microsoft
https://docs.microsoft.com/en-us/previous-versions/ff546939(v=vs.85)
2019-02-15T23:47:12
CC-MAIN-2019-09
1550247479627.17
[]
docs.microsoft.com
You can enable a vCloud Director organization VDC for replication. This procedure requires the vCloud Director Organization Administrator role. Procedure - Create an SSH connection to the vCloud Availability Installer Appliance. - To enable an organization VDC for replication, run the following commands. After the process finishes, you get an OK message.
https://docs.vmware.com/en/vCloud-Availability-for-vCloud-Director/2.0/com.vmware.vcavcd.admin.doc/GUID-BB2753D1-A091-456D-963A-02CCB350A4B0.html
2019-02-15T22:48:18
CC-MAIN-2019-09
1550247479627.17
[]
docs.vmware.com
Built-in CLI applications¶ You can use any pump.io client application you want to interact with pump.io servers. However, the pump.io package comes with some samples to get you started and you can find some more in the repository. pump-register-app¶ First use this tool to create the credentials file $ ./bin/pump-register-app -t <APPNAME> <APPNAME> will be the name of the client app that pump-register-app registers with the server. This will create the file ~/.pump.d/<SERVER>.json that contains your credentials. { "client_id":"XXXX", "client_secret":"YYYYY", "expires_at":0 } It will also add an entry into the server database where you will find the clientID. (Of course, if you use the memory Databank driver the data will be lost between server runs, and you’ll need to rerun the configuration.)
https://pumpio.readthedocs.io/en/latest/builtin-cli.html
2019-02-15T23:40:38
CC-MAIN-2019-09
1550247479627.17
[]
pumpio.readthedocs.io
Release Notes for Build 49.23 of NetScaler MAS 12.1 Release.1 releases. - The [# XXXXXX] labels under the issue descriptions are internal tracking IDs used by the Citrix ADC team. What's New?[# 702268] - Viewing security insight for Citrix ADC instances with an application firewallNetScaler MAS now supports security insight from all Citrix ADC instances that have application firewall configured on them. For more information, see[# 708013] - Self-diagnostic check for analyticsNetScaler MAS[# 709970] Networks - Ability to send event notifications to SlackEarlier, in NetScaler MAS GUI you had an option to send email notifications for events. You can now send an event notification to Slack channel also.Configure the required Slack channel by providing the profile name and the webhook URL in NetScaler MAS GUI. The event notifications are then sent to this channel. For more details, see Adding event rule actions section in[# 656472] - Test button for email configuration for NetScaler MAS[# 705515] - Role Based Access Control on GSLB domainsYou can now allow only authorized users to perform GSLB configuration using StyleBooks as RBAC is currently supported on GSLB domains also.NetScaler MAS now supports a new entity called “DNS Domain Name.” In NetScaler MAS,.[# 706988, 708419] - Configurable auto license support for non-addressable virtual serversNetS NetScaler MAS under Networks > Licenses > System Licenses is “Auto-select non-addressable Virtual Servers.” Enabling this option now allows you to explicitly specify that the licensing should include non-addressable virtual servers also. For more details, see MAS, by default, still does not auto select non-addressable virtual servers for licensing.Application analytics (App Dashboard) is the only analytics supported currently on licensed non-addressable virtual servers.[# 707843] - Support for HTTP and HTTPS port customizationYou can now specify non-default ports in NetScaler MAS to send HTTP and HTTPS requests to a Citrix ADC CPX instance. The non-default HTTP and HTTPS ports are configured while creating a Citrix ADC Profile. For more details, see How to create a Citrix ADC Profile section in[# 708213] - Ability to tag Citrix ADC instancesTags are terms or keywords that you can assign to a Citrix ADC instance to associate some additional description about the Citrix ADC instance. NetScaler MAS now allows you to associate your Citrix ADC instances with tags. These tags allow you to group, identify, and search for the Citrix ADC instances. For more details, see Create tags and assign to instances. For more details, see[# 708603] - Improvements in Citrix ADC pooled capacity featureA few modifications have been made in the pooled licenses page in NetScaler MAS NetScaler MAS. For more details, see[# 709975, 711457] - Improved functionality to search Citrix ADC instancesConsider a scenario where NetScaler MAS is managing many Citrix ADC instances. You might want the flexibility to search the inventory of instances based on some search parameters. NetScaler MAS[# 709997] Orchestration - Deploying OpenStack LBaaS configurations through StyleBooksIn the OpenStack orchestration workflow, NetScaler MAS[# 702345, 702340] - Ability to enable CICO and pooled license support for OpenStack environmentThe service package page in the Orchestration feature in NetScaler MAS is enhanced to provide the license that is required to be installed on the Citrix ADC instances that are created on demand. The licenses provided can be either CICO or pooled capacity license. 
For more details, see[#. NetScaler MAS then configures the Citrix ADC instance to client networks by adding a SNIP in that network, and will further add a default route to the client network gateway. This enables the instance to reach the servers through the client gateway. For more details, see[# 709950] StyleBooks - Back-end SSL Protocol support by SharePoint StyleBookSharePoint StyleBook now supports SSL to bind service groups (SharePoint application servers) to the target load balancing virtual servers. For more details, see[# 706507] - Using StyleBooks to create load balancing virtual servers with an application firewallYou can now automate the configuration of Citrix ADC WAF (Web Citrix Web App Firewall) feature using the new default WAF StyleBook in NetScaler MAS..[# 708597] System - Improvements to NetScaler MAS user interfaceNetScaler MASnodes NetScaler MAS. • Seach option is now available on most of the pages where you can search for a particular item. Some of the items that you can search are as follows: • an instance • an instance group • an event • a rule • a config job • a networking entity • a StyleBook[# 710010] Fixed Issues Analytics - In Security Insight, the Search functionality in the application summary table does not work.[# 630276, 685673] - Citrix ADC instance might occasionally crash when there is a toggle between enabling/disabling AppFlow configuration.[# 702155] - The combined graphical view in HDX insight shows incorrect time zone.[# 703906] - Aggregation of Gateway Insight fails if HDX Insight is not enabled, because the the report time was not getting set as required. This build fixes this issue.[# 709233] - NetScaler MAS upgrade from 12.0 to 12.1 fails for this scenario: the database summarization configuration (days to persist hourly data) is set for more than ten days (default is one day) for Analytics.[# 710501] - Agent registration fails because of the Unicode characters seen in the location details.[#.[# 712110] - In some cases, the Citrix Gateway appliance dumps core during the authentication if the following conditions are met:- The Citrix ADC appliance is configured for nFactor authentication.- The Gateway Insight feature is enabled for the appliance.[# 713011, 713168] - You cannot configure AppFlow on Citrix ADC instances when the virtual server name has "blank" spaces.[# 713133] - It is not possible to see information in Web insight when you access NetScaler MAS with “read-only” privileges.[# 713404] - NetScaler MAS analytics might not show web insight reports consistently. Delay in accessing GeoMap location sometimes delays the aggregation of analytics reports for web insight.[# 713648] Applications - Citrix ADC exports "multiple application terminate" records for the same application. This causes NetScaler MAS afdecoder process to crash.[# 709462] High Availability - When a Citrix ADC instance in high availability mode is assigned to a user group, and if the instance pair fails over, the instance is no longer assigned to the user group.[#.[# 709770, 709768] - After a manual failover, NetScaler MAS doesn't receive SNMP v3 traps. 
This is because, the configuration changes in the Citrix ADC instance are not complete for the new primary node in the high availability, and the Citrix ADC instance is not sending the traps to NetScaler MAS after a failover.[# 709802] - If you configure one node in a pair of Citrix ADC instances in high availability mode with the IP address in the range 171.31.200.x, this pair of Citrix ADC instances is not discovered by NetScaler MAS.[# 710589] Networks - There might be a situation where a table spills over to multiple pages because NetScaler MAS.[# 699718] - Reports are not visible after NetScaler MAS fails to respond when the EventFilterManager::execute_action command is run.[# 701430, 710050] - Duplicate entries are displayed when you filter all listed entries in network function such as load balancing, content switching, cache redirection, and others.[# 704095] - Though you can see system notifications about "userlogout," you might not receive any email notifications.[# 704344] - The Networks event digest report data is getting truncated due to formatting issues.[# 704980] - You cannot configure AppFlow on virtual servers if NetScaler MAS is configured with any interface other then 0/1.[# 705330] - Consider a scenario where you have RBAC access to only Networks, Analytics, and System nodes. The default behavior is that the first node in the navigation pane, that is, Networks should be the landing page when you access NetScaler MAS. But now the Analytics node is the landing page.[# 705347] - Do not include white spaces in configuration audit template names.[# 708003] - When you select a CB5000 Citrix ADC SD-WAN instance and click Current Configuration, NetScaler MAS displays a message that says there is an error in retrieving CloudBridge Current Config. This build fixes this issue.[# 708771] - When you add Citrix ADC SD WAN WO instance to NetScaler MAS, SNMP connection is not successful, and the GUI becomes unresponsive.[# 709146] - Citrix ADC instances in version 12.1 build 48.13 added in NetScaler MAS.[# 710564, 673744] - You cannot schedule exporting of reports in Network Reporting because the export input parameter for the external user list exceeds the limit of 4096 chars.[# 710872] - In accordance with the new naming conventions, VPN in Configure Analytics view is renamed as "Citrix] - Auto-rollback in NetScaler MAS doesn't happen when you run wrong commands. For example, "rm" commands. You get a "null" if rollback command is not available.[# 713923] - If you use the name of an existing configuration template as the name of the configuration job, you might not be able to edit the job later.[# 713926, 713927] Orchestration - NetScaler MAS displays an unknown system error when a service package is created for the first time for OpenStack. This error occurs when tenants are being assigned to the service package.[# 709947] StyleBooks - If there is an error while importing a StyleBook in 'raw' format, the scroll bar in the StyleBook editor stops working. Sometimes, the scroll bar doesn't work after deleting a StyleBook.[# 710372] System - When you attempt to restore NetScaler MAS, it may fail to respond, and the following error is displayed: "Restore exception: File access error: directory not empty: /var/mps/tenants/root/device_backup" when restore is attempted.[# 705132] - To defend against ClickJacking attacks, configure a list of allowed hosts. The content security policy (CSP) frame-ancestors and X-Frame-Options are not included in the whitelist. 
Add them explicitly to the whitelist.[# 706431, 705731] - When you try to connect to Citrix ADC instances using SSH, the NetScaler MAS subsystem crashes. This is fixed in this build.[# 707100] - NetScaler MAS fails to prevalidate the Citrix ADC instance if the NTP details are added in rc.netscaler file. You can now select those Citrix ADC instances and remove them while upgrading the instances.[# 708466] - When you upgrade NetScaler MAS from 12.0 to 12.1, only one non-default site is preserved, and the rest are deleted. You must create the sites again. There is no workaround for this issue.[# 710509] - Certificates that have passphrase in them have difficulty connecting to the database.[# 710876] - When a Citrix ADC instance is removed from NetScaler MAS, the backup files associated with the instance are not removed.[# 711302] - In NetScaler MAS, auto purging of data in the database does not free up disk space.[# 711405] - While upgrading NetScaler MAS.[# 712073, 714304] Release history - Build 49.23 (2018-08-28) (Current build)
https://docs.citrix.com/en-us/citrix-application-delivery-management-software/12-1/downloads/NetScaler-MAS-12-1-49-23.html
2019-02-16T00:12:29
CC-MAIN-2019-09
1550247479627.17
[]
docs.citrix.com
Using objects that implement IDisposable The common language runtime's garbage collector reclaims the memory used by managed objects, but types that use unmanaged resources implement the IDisposable interface to allow the memory allocated to these unmanaged resources to be reclaimed. When you finish using an object that implements IDisposable, you should call the object's IDisposable.Dispose implementation. You can do this in one of two ways: With the C# usingstatement or the Visual Basic Usingstatement. By implementing a try/finallyblock. The using statement The using statement in C# and the Using statement in Visual Basic simplify the code that you must write to create and clean up; using System.IO; public class Example { public static void Main() { Char[] buffer = new Char[50]; using (StreamReader s = new StreamReader("File1.txt")) { int charsRead = 0; while (s.Peek() != -1) { charsRead = s.Read(buffer, 0, buffer.Length); // // Process characters read. // } } } } Imports System.IO Module Example Public Sub Main() Dim buffer(49) As Char Using s As New StreamReader("File1.txt") Dim charsRead As Integer Do While s.Peek() <> -1 charsRead = s.Read(buffer, 0, buffer.Length) ' ' Process characters read. ' Loop End Using End Sub End Module Note that; using System.IO; public class Example { public static void Main() { Char[] buffer = new Char[50]; { StreamReader s = new StreamReader("File1.txt"); try { int charsRead = 0; while (s.Peek() != -1) { charsRead = s.Read(buffer, 0, buffer.Length); // // Process characters read. // } } finally { if (s != null) ((IDisposable)s).Dispose(); } } } } Imports System.IO Module Example Public Sub Main() Dim buffer(49) As Char '' Dim s As New StreamReader("File1.txt") With s As New StreamReader("File1.txt") Try Dim charsRead As Integer Do While s.Peek() <> -1 charsRead = s.Read(buffer, 0, buffer.Length) ' ' Process characters read. ' Loop Finally If s IsNot Nothing Then DirectCast(s, IDisposable).Dispose() End Try End With End Sub End Module(. This may be your personal coding style, or you might want to do this for one of the following reasons: To include a catchblock to handle any exceptions thrown in the tryblock. Otherwise, any exceptions thrown by the usingstatement are unhandled, as are any exceptions thrown within the usingblock if a try/catchblock isn't present..(); } } } Imports System.Globalization Imports System.IO Module Example Public Sub Main() Dim sr As StreamReader = Nothing Try sr = New StreamReader("file1.txt") Dim contents As String = sr.ReadToEnd() Console.WriteLine("The file has {0} text elements.", New StringInfo(contents).LengthInTextElements) sr IsNot Nothing Then sr.Dispose() End Try End Sub End Module You can follow this basic pattern if you choose to implement or must implement a try/finally block, because your programming language doesn't support a using statement but does allow direct calls to the Dispose method. See also Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/using-objects
2019-02-15T22:53:47
CC-MAIN-2019-09
1550247479627.17
[]
docs.microsoft.com
Acquisition board The pyPhotometry acquisition board uses a Micropython microcontroller to acquire two digital and two analog signals, and to generate analog control signals for two built-in LED driver circuits. The acquisition board draws power from the Micropython's USB connector and requires no additional power supply. Safety To prevent short circuits due to contact with metal objects, the board should be securely mounted using M3 bolts and insulating spacers. The mounting holes on the acquisition board have a 50 x 75mm spacing so the board can be mounted directly on a Thorlabs metric optical breadboard using M6 to M3 thread adaptors. Though the LED drivers are relatively low power, care should be taken to avoid shining light directly into the eye. This is particularly important with LEDs whose wavelength lies outside the visible spectrum. Analog inputs The two BNC analog inputs SIGNAL 1 and SIGNAL 2 receive fluorescence signals as analog voltages from the photodetectors. The signals pass through an RC lowpass filter with a cutoff frequency of 10 kHz and are then read by the Micropython board's analog to digital converters (ADCs). The Micropython ADCs have a 0 - 3.3V input range. As the ADC pins are not 5V tolerant, clamp diodes to the 3.3V rail are used to prevent damage if the signal rises above 3.3V. Oversampling is used to increase the 12-bit resolution of the ADCs to 15 bits - i.e. to generate each sample the ADC is read 64 times and averaged, giving an extra 3 bits of resolution. Digital inputs The two BNC digital inputs are compatible with 5V or 3.3V logic and are typically used to acquire sync pulses or timestamps, e.g. generated by behavioural hardware. The digital inputs connect directly to pins on the microcontroller. LED drivers The acquisition board has two constant current LED driver circuits which are controlled by the Micropython's digital to analog converters (DACs), allowing the LED current to be adjusted between 0 - 100mA. The LED driver outputs are M8 connectors that are compatible with either Doric or Thor Labs connectorized LEDs. The LED driver circuits are voltage controlled current sinks in which an op amp adjusts the voltage at a MOSFET's gate to bring the voltage across a sense resistor in series with the LED into agreement with the control voltage from the microcontroller. A resistor between the 3.3V rail and the inverting input of the op amp ensures the MOSFET is fully turned off when the control voltage is 0. The LED current can be monitored by measuring the voltage between the SENSE1 or SENSE2 and GND connections on the acquisition board, which gives the voltage across the 4.7 ohm sense resistors in series with the respective LEDs. Assembly instructions Acquisition board The acquisition board can be purchased from the Open Ephys store for €350 or built from components. The design files for the acquisition board are in the hardware repository. To assemble the board from components you will need to get the PCB printed (using either the Gerber or Eagle files) and order the electronic components listed in the BOM (Farnell part numbers are provided). Assembling the acquisition board requires both surface mount and through hole soldering. The surface mount soldering can be done either using a reflow oven or hand soldering. Hand soldering of surface mount components requires a bit of practice but there are lots of tutorials online.
Solder all the surface mount components before soldering the through hole components, as once the through hole components are in place they will get in the way. The micropython board is attached to the acquisition board using the male and female 16 way headers. First solder the female headers onto the micropython board, then insert the male headers into the female headers, mount the micropython on the acquisition board and solder the male headers.
Optical components
To make a complete photometry system the acquisition board needs to be paired with LEDs, photoreceivers, filter cubes and other optical components. A parts list for a set of additional components that can be used with the acquisition board for green/red two colour experiments (e.g. GCaMP/TdTomato) is provided in the resources section of the docs, and as an Excel file in the hardware repository. If you plan to use the time-division multiplexed illumination mode, the maximum sampling rate that can be achieved without crosstalk between the signals will depend on the bandwidth of the photoreceivers. We use Newport 2151 photoreceivers in DC coupled mode, which have a bandwidth of 0-750 Hz. The optical components for the red/green system are positioned and connected as indicated below.
To assemble the system:
- Attach the minicube to the optical breadboard using the clamp (CL3/M) and 45mm M6 bolts.
- Attach the acquisition board to the breadboard using the M6-M3 screw adaptors, M3 spacers and 10mm M3 screws. Screw the adaptors into the breadboard, then attach the acquisition board with the spacers between the board and the adaptors.
- Attach the Newport photoreceivers to the breadboard using the pillars (TRP14/M), clamping forks (MSC2) and 12mm M6 bolts.
- Attach the LEDs to the breadboard using the 12mm M6 bolts.
- Connect the photoreceivers to the minicube using the 30cm, 600um core 0.48NA optic fibers (MFP_600/630/LWMJ-0.48_0.3m_FCM-FCM).
- Connect the LEDs to the minicube using the 30cm, 200um core 0.48NA optic fibers (MFP_200/220/LWMJ-0.48_0.3m_FCM-FCM).
- Connect the photoreceivers to the acquisition board analog inputs using the 30cm BNC cables.
- Connect the LEDs to the acquisition board LED outputs using their built in cables. It is not necessary to connect the power supplies for the LED cooling fans, as the maximum current output by the acquisition board is only 10% of the LEDs' rated current.
- Connect the pigtailed rotary joint to the sample port of the minicube and connect the fiber patchcord to the rotary joint using the FC-FC adapter.
Modifying the pyPhotometry board for higher LED current
The LED drivers on the standard pyPhotometry board can output up to 100mA currents. While this is fine for many applications, higher LED currents may be preferable in some cases - e.g. with constructs that need higher light intensities or use wavelengths where only less efficient LEDs are available. We plan to release a version of the board with higher maximum LED current, but have also looked into the possibility of increasing the maximum LED current that can be output from the standard board. With a simple modification to the board - changing one resistor value per LED driver - it is possible to increase the maximum current that can be used in the time-division illumination modes up to 400mA. Modifying the board risks damaging it and is done entirely at the user's risk. The schematic of the LED driver circuit is shown below.
The current through the LED is controlled by the MOSFET (labelled Q1), whose resistance is determined by the voltage applied to its gate by the op amp (IC1). The op amp adjusts the gate voltage, and hence the LED current, to bring the voltage across the sense resistor (R5) into agreement with the voltage at the + input of the op amp, which is controlled by the pyboard DAC (connected to wire CTRL1). The upshot of this is that the LED current is proportional to the control voltage output by the pyboard DAC, with a slope determined by the value of the sense resistor R5. If you halve the value of the sense resistor R5, the LED current for a given control voltage will be doubled. The maximum current that can safely be output is limited by power dissipation in the MOSFET and sense resistor. LEDs typically have a forward voltage of 1.8-3.3V depending on the LED wavelength and current. The supply voltage is 5V, and the remaining voltage drop will occur across the MOSFET and sense resistor, dissipating power as heat. Higher LED currents result in more power dissipation and hence more heating, putting an upper limit on the current that can be delivered without damaging these components. This is approximately 200mA continuous current, but will depend a bit on the exact LED used, so limiting max currents to 100mA is recommended in continuous mode. However, in the time division illumination modes, each LED is only on for a small fraction of the time, hence both the average current and power dissipation are much lower, and higher LED on currents are possible - we suggest 400mA as an upper limit. To modify the system to use higher currents you would need to replace the 4.7Ω sense resistors R5 and R6 (for LEDs 1 and 2 respectively), circled in yellow on the diagram below, with a 1.2Ω 0805 package resistor, e.g. Farnell part number 1717800. Changing the resistor values from 4.7 to 1.2Ω without modifying the code will result in the actual LED currents being 3.9x higher than those specified in the GUI and data files. To modify the code so that the LED currents are specified correctly, you would need to change the slope of the LED_calibration variable defined at line 16 in upy/photometry_upy.py from its default value of 38.15 to 9.78. You also need to modify the maximum LED current that can be set in the GUI from the default 100mA to 400mA, by changing lines 119 and 120 in GUI/GUI_main.py where the range of the current controls is set. These changes should be done on the latest version of the code currently on GitHub, as earlier versions (including the last official release v0.3) use a single byte to send LED current commands to the board during acquisition, so can't handle LED current values above 256mA. Ideally the code should be modified to only allow currents above 100mA to be used in time-division but not continuous mode. This is beyond the scope of this document, but something we plan to implement in a future release.
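The resistor swap and the matching calibration change follow directly from Ohm's law across the sense resistor: the LED current for a given DAC control voltage scales as 1/R_sense, so the calibration slope must scale the same way. The short Python sketch below is not part of the pyPhotometry code base - it simply reproduces that arithmetic using the resistor and slope values quoted above; the small gap between the computed slope (about 9.74) and the quoted 9.78 presumably reflects calibration details not covered here.

R_OLD = 4.7        # ohms, stock sense resistor (R5/R6)
R_NEW = 1.2        # ohms, replacement 0805 resistor
SLOPE_OLD = 38.15  # default LED_calibration slope in upy/photometry_upy.py

# Current for a given control voltage scales inversely with the sense resistance.
current_ratio = R_OLD / R_NEW
print(f"LED current increase without code changes: {current_ratio:.1f}x")  # ~3.9x

# Rescale the calibration slope so GUI-specified currents match the real currents.
SLOPE_NEW = SLOPE_OLD * R_NEW / R_OLD
print(f"Adjusted LED_calibration slope: {SLOPE_NEW:.2f}")  # ~9.74 (the doc uses 9.78)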
https://pyphotometry.readthedocs.io/en/latest/user-guide/hardware/
2022-01-16T19:39:32
CC-MAIN-2022-05
1642320300010.26
[array(['../../media/board_photo.jpg', 'Acquisition board'], dtype=object) array(['../../media/optical_parts_diagram.jpg', 'pyPhotometry GUI'], dtype=object) array(['../../media/LED_driver_schematic.png', 'LED driver schematic'], dtype=object) array(['../../media/sense_resistors.png', 'Sense resistors'], dtype=object)]
pyphotometry.readthedocs.io
Return binary media from a Lambda proxy integration
To return binary media from an AWS Lambda proxy integration, base64 encode the response from your Lambda function. You must also configure your API's binary media types. To use a web browser to invoke an API with this example integration, set your API's binary media types to */*. API Gateway uses the first Accept header from clients to determine if a response should return binary media. To return binary media when you can't control the order of Accept header values, such as requests from a browser, set your API's binary media types to */* (for all content types).
The following example Python 3 Lambda function can return a binary image from Amazon S3 or text to clients. The function's response includes a Content-Type header to indicate to the client the type of data that it returns. The function conditionally sets the isBase64Encoded property in its response, depending on the type of data that it returns.

import base64
import boto3
import json
import random

s3 = boto3.client('s3')

def lambda_handler(event, context):
    number = random.randint(0,1)
    if number == 1:
        response = s3.get_object(
            Bucket='bucket-name',
            Key='image.png',
        )
        image = response['Body'].read()

        return {
            'headers': {"Content-Type": "image/png"},
            'statusCode': 200,
            'body': base64.b64encode(image).decode('utf-8'),
            'isBase64Encoded': True
        }
    else:
        return {
            'headers': {"Content-type": "text/html"},
            'statusCode': 200,
            'body': "<h1>This is text</h1>",
        }

To learn more about binary media types, see Working with binary media types for REST APIs.
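For completeness, a hedged client-side sketch using the Python requests library is shown below; the invoke URL and resource path are placeholders rather than values from this page. The point is only that the Accept header sent by the client (together with the API's binary media type settings) determines whether API Gateway returns the decoded image bytes or the HTML branch of the function.

import requests

# Hypothetical invoke URL for an API that fronts the Lambda function above.
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/random"

# Asking for image/png signals that a binary response is acceptable; with the
# API's binary media types set to */*, a browser-style Accept header also works.
resp = requests.get(url, headers={"Accept": "image/png"})

if resp.headers.get("Content-Type") == "image/png":
    with open("response.png", "wb") as f:
        f.write(resp.content)  # API Gateway has already base64-decoded the body
else:
    print(resp.text)  # the text/html branch of the Lambda function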
https://docs.aws.amazon.com/apigateway/latest/developerguide/lambda-proxy-binary-media.html
2022-01-16T20:47:39
CC-MAIN-2022-05
1642320300010.26
[]
docs.aws.amazon.com
Add a document reference in the query question
Adding a document reference inserts a link to an existing document in the engagement file. You can add a reference to a document in the query question. For example, you can add a link to a letter or checklist.
To add a document reference in the query question:
- Go to the query question where you want to insert a document reference.
- Select the text field to display the formatting toolbar.
- Select Insert Document Reference. A list of the available documents in the engagement file displays. You can select Show hidden documents to select a hidden document if needed.
- Select the document you want to reference from the list.
A reference to the selected document is created and added in the query question.
https://docs.caseware.com/2020/webapps/31/fr/Engagements/Template-and-Authoring/Add-a-document-reference-in-the-query-question.htm?region=us
2022-01-16T19:15:53
CC-MAIN-2022-05
1642320300010.26
[]
docs.caseware.com
Version 0.10¶
- Added proper setup views for the document grouping functionality. Document grouping is now called smart linking as it relates better to how it actually works.
- The database schema was changed and users must run the required: $ ./manager syncdb for the new tables to be created.
- Grappelli is no longer required and can be uninstalled.
- New smarter document preview widget that doesn't allow zooming or viewing unknown or invalid documents.
- New office document converter, requires: LibreOffice and unoconv [version 0.5]. The new office documents converter won't convert files with the extension .docx because these files are recognized as zip files instead. This is an issue of the libmagic library.
- New configuration option added, CONVERTER_UNOCONV_USE_PIPE, that controls how unoconv handles the communication with LibreOffice. The default of True causes unoconv to use pipes; this approach is slower than using TCP/IP ports but it is more stable.
- Initial REST API that exposes documents properties and one method. This new API is used by the new smart document widget and requires the package djangorestframework; users must issue a: $ pip install -r requirements/production.txt to install this new requirement.
- MIME type detection and caching performance updates.
- Updated the included version of jQuery to 1.7
- Updated the included version of JqueryAsynchImageLoader to 0.9.7
- Document image serving response now specifies a MIME type for increased browser compatibility.
- Small change in the scheduler that increases stability.
- Russian translation updates (Sergey Glita)
- Improved and generalized the OCR queue locking mechanism; this should eliminate any possibility of race conditions between Mayan EDMS OCR nodes.
- Added support for signals to the OCR queue; this results in instant OCR processing upon submittal of a document to the OCR queue. This works in addition to the current polling processing, which eliminates the possibility of stale documents in the OCR queue.
- Added multiple document OCR submit link
- Re-enabled Tesseract language specific OCR processing and added a one (1) time language neutral retry for failed language specific OCR
https://docs.mayan-edms.com/releases/0.10.html
2022-01-16T19:25:20
CC-MAIN-2022-05
1642320300010.26
[]
docs.mayan-edms.com
Check Command: check_truepool¶ TruePool.io is a trusted Chia cryptocurrency farming (mining) pool. This NEMS check command allows you to check your TruePool.io farmer status to ensure your Chia farm is online and actively farming to the pool. Learn more about what makes TruePool.io unique in this video. check_truepool expects just one argument (Launcher ID) and responds accordingly. You can create multiple check_truepool advanced services to check any number of Launcher IDs. For example, monitor your own and your family member’s farm status. This check command requires NEMS Linux 1.6+. Expected Responses¶ OK- Farm is online and actively gaining points on Truepool.io CRITICAL- Either “Not found” (you provided an invalid Launcher ID) or “Farm Offline” (your farm has gained 0 points since the last check) Sample Output: OK - Points: 830 (24 hrs), 243 (this block) / Share: 0.0810% / Diff: 1 [Cache] If your check_truepool response includes [Cache] it means you are running the check command too frequently. Check your farm and determine approximately how long it takes you to solve a partial, then set your check_truepool to run with a cushion to allow variance. For example, if it takes you 60 seconds to solve a partial, you should not be running the check more frequently than every 3 minutes or so (if you need that level of monitoring). I’d say running every 10 minutes would be appropriate for most users. Configuration¶ Obtain your Chia launcher ID. See this truepool.io knowledgebase article for help with this. NEMS Configurator Setup - Add a new Advanced Service - Give the service a name such as “TruePool (Robbie)” - Give the service a description such as “Chia Farm - Robbie” - I identify my farmer since I monitor multiple farmers - Set Check Period to 24/7 - Set Notification Period to Work Hours (I don’t need to be awoken if my farm goes down) - Assign the advanced service to host NEMS - Set max check attempts: 5 - Set check interval: 10 - Set retry interval: 10 - Set first notification delay: 30 - Set notification interval: 120 - Set notification options: w,u,c,r - Add your Launcher ID to the appropriate field - Save, and generate your NEMS Config
https://docs.nemslinux.com/en/latest/check_commands/check_truepool.html
2022-01-16T19:02:39
CC-MAIN-2022-05
1642320300010.26
[]
docs.nemslinux.com
Configuring business logic-based routing APIs
To process cases more quickly, ensure that assignments are routed to the most appropriate workers by using custom APIs for business logic-based routing. You can choose from default APIs or add custom APIs to meet your unique business needs. You can also override APIs to modify lists of available operators and work queues, so that the lists contain only relevant workers.
This procedure contains the following tasks:
- Adding custom routing options in business logic-based routing
- Modifying lists of operators and work queues
For more information about routing assignments, see Choosing an assignee at run time in Dev Studio and Choosing an assignee at run time in App Studio.
https://docs.pega.com/configuring-business-logic-based-routing-apis
2022-01-16T20:20:01
CC-MAIN-2022-05
1642320300010.26
[]
docs.pega.com
Packaging an application
To migrate a new application to a different environment, you must package the application before you can import it to the new environment.
Merging application changes
If you develop your application features in separate branches, use the Merge Branches wizard to merge the branches before you package the application. The wizard shows any merge conflicts so that you can correct them before you merge the branches. Additionally, the Pega 1:1 Operations Manager application allows business users to make controlled changes in a Business Operations Environment (BOE), and test the changes in a production environment within boundaries defined by your organization's IT department. For more information, see Revision management.
Packaging an application for migration
Before you migrate a new application to a different environment, package the relevant data instances and rulesets into a product rule. The product rule is an instance of Rule-Admin-Product, which Pega Platform refers to as the RAP file.
- In the header of Dev Studio, click to start the Application Packaging wizard. For more information about using the wizard, see Packaging your application in a product rule.
- Complete each page of the Application Packaging wizard.
- On the last page of the wizard, click Preview.
- Review the contents of the generated RAP file.
- If you want to make any changes, on the last page of the wizard, click Modify.
- When you have completed your review of the RAP file, click Export.
Result: The wizard creates a .zip file in the ServiceExport directory on the current application server node.
https://docs.pega.com/pega-sales-automation-implementation-guide/86/packaging-application
2022-01-16T20:34:04
CC-MAIN-2022-05
1642320300010.26
[]
docs.pega.com
Summary
We've achieved a lot during this tutorial; let's review what we have achieved and how PrimeHub helped us at each stage of the process.
Part 1 - Label Data
In Part 1, we used Label Studio, installed via PrimeHub Apps, to label a series of screw images. PrimeHub Apps provides easy access and configuration of popular apps such as Code Server, Matlab, Label Studio, MLflow, and Streamlit. These apps allow users to orchestrate data and provide tools to accelerate their machine learning workflow.
Part 2 - Train and Tune the Model
In Part 2, we trained the model in Jupyter Notebook, and submitted the notebook as a parameterized job. The results from our jobs were automatically logged into MLflow for experiment tracking. The Submit Notebooks as Job feature lets users configure resources through Instance Type settings, use different images for experimentation, and even use different parameters to tune models. Once connected to PrimeHub, MLflow autologging means that job results can be collected and reviewed easily. Data on parameters, metrics, and artifacts are available for model training history.
Part 3 - Manage, Compare, and Deploy the Model
In Part 3, we compared the results of our jobs and registered the better-performing model to PrimeHub Models. After customizing a server image, we then deployed the model as a web service endpoint. PrimeHub's Model Management stores version-managed models for use in different projects, frameworks, or any designated use. Users can access previously trained models, deploy models, and share well-trained models across teams. Once ready for deployment, the input and output of a model image can be customized and then deployed in a cloud-ready environment using PrimeHub's Model Deployment feature. There's no need for users to provision server resources; PrimeHub takes care of setting up all required resources.
Part 4 - Build the Web App
In Part 4, we added a web interface to our endpoint using Streamlit, also installed via PrimeHub Apps. Streamlit is a powerful tool for building web apps with trained models. The web apps made with Streamlit have many uses, such as data feedback, data correction, and model validation, etc. With PrimeHub, you will experience a wonderful machine learning journey! Enjoy it!
https://docs.primehub.io/docs/primehub-end-to-end-tutorial-summary
2022-01-16T18:50:14
CC-MAIN-2022-05
1642320300010.26
[]
docs.primehub.io
Training Service¶
What is Training Service?¶
NNI training service is designed to allow users to focus on AutoML itself, agnostic to the underlying computing infrastructure where the trials are actually run. When migrating from one cluster to another (e.g., local machine to Kubeflow), users only need to tweak several configurations, and the experiment can be easily scaled. Users can use the training services provided by NNI to run trial jobs on a local machine, on remote machines, and on clusters like PAI, Kubeflow, AdaptDL, FrameworkController, DLTS, AML and DLC. These are called built-in training services. If the computing resource you want to use is not listed above, NNI provides an interface that allows users to build their own training service easily. Please refer to how to implement training service for details.
How to use Training Service?¶
Training service needs to be chosen and configured properly in the experiment configuration YAML file. Users could refer to the document of each training service for how to write the configuration. Also, the reference provides more details on the specification of the experiment configuration file.
Next, users should prepare the code directory, which is specified as codeDir in the config file. Please note that in non-local mode, the code directory will be uploaded to the remote machine or cluster before the experiment. Therefore, we limit the number of files to 2000 and the total size to 300MB. If the code directory contains too many files, users can choose which files and subfolders should be excluded by adding a .nniignore file that works like a .gitignore file. For more details on how to write this file, see this example and the git documentation.
In case users intend to use large files in their experiment (like large-scale datasets) and they are not using local mode, they can either: 1) download the data before each trial launches by putting it into the trial command; or 2) use a shared storage that is accessible to worker nodes. Usually, training platforms are equipped with shared storage, and NNI allows users to easily use them. Refer to the docs of each built-in training service for details.
Built-in Training Services¶
What does Training Service do?¶
According to the architecture shown in Overview, training service (platform) is actually responsible for three things: 1) initiating a new trial; 2) collecting metrics and communicating with NNI core (NNI manager); and 3) monitoring trial job status. To demonstrate in detail how training service works, we show the workflow of training service from the very beginning to the moment when the first trial succeeds.
Step 1. Validate config and prepare the training platform. Training service will first check whether the training platform the user specifies is valid (e.g., is there anything wrong with authentication). After that, training service will start to prepare for the experiment by making the code directory (codeDir) accessible to the training platform.
Note: Different training services have different ways to handle codeDir. For example, local training service directly runs trials in codeDir. Remote training service packs codeDir into a zip and uploads it to each machine. K8S-based training services copy codeDir onto a shared storage, which is either provided by the training platform itself, or configured by users in the config file.
Step 2. Submit the first trial. To initiate a trial, usually (in non-reuse mode), NNI copies a few more files (including parameters, the launch script, etc.) onto the training platform.
After that, NNI launches the trial through a subprocess, SSH, a RESTful API, etc.
Warning: The working directory of the trial command has exactly the same content as codeDir, but can have different paths (even on different machines). Local mode is the only training service that shares one codeDir across all trials. Other training services copy codeDir from the shared copy prepared in step 1, and each trial has an independent working directory. We strongly advise users not to rely on the shared behavior in local mode, as it will make your experiments difficult to scale to other training services.
Step 3. Collect metrics. NNI then monitors the status of the trial, updates the recorded status (e.g., from WAITING to RUNNING, RUNNING to SUCCEEDED), and also collects the metrics. Currently, most training services are implemented in an "active" way, i.e., the training service will call the RESTful API on the NNI manager to update the metrics. Note that this usually requires the machine that runs the NNI manager to be at least accessible to the worker node.
Training Service Under Reuse Mode¶
When reuse mode is enabled, a cluster, such as a remote machine or a computer instance on AML, will launch a long-running environment, so that NNI will submit trials to these environments iteratively, which saves the time needed to create new jobs. For instance, using the OpenPAI training platform under reuse mode can avoid the overhead of pulling docker images, creating containers, and downloading data repeatedly.
In reuse mode, users need to make sure each trial can run independently in the same job (e.g., avoid loading checkpoints from previous trials).
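As a concrete illustration of the .nniignore mechanism mentioned above, a minimal file might look like the sketch below. The directory names are hypothetical examples, not part of NNI itself; the syntax simply follows .gitignore conventions and excludes anything that should not be uploaded with codeDir:

# .nniignore - keep large or irrelevant files out of the uploaded code directory
# (the folder names below are hypothetical examples)
data/
checkpoints/
outputs/
*.log
.git/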
https://nni.readthedocs.io/en/stable/TrainingService/Overview.html
2022-01-16T18:38:02
CC-MAIN-2022-05
1642320300010.26
[array(['https://user-images.githubusercontent.com/23273522/51816536-ed055580-2301-11e9-8ad8-605a79ee1b9a.png', 'drawing'], dtype=object) ]
nni.readthedocs.io
Test Failover On this page This feature is not available for M0 free clusters, M2, and M5 clusters. To learn more about which features are unavailable, see Atlas M0 (Free Cluster), M2, and M5 Limitations. Replica set elections are necessary every time Atlas makes configuration changes as well as during failure scenarios. Configuration changes may occur as a result of patch updates or scaling events. As a result, you should write your applications to be capable of handling elections without any downtime. MongoDB drivers can automatically retry certain write operations a single time. Retryable writes provide built-in handling of automatic failovers and elections. To learn more, See retryable writes. To enable this feature, add retryWrites=true to your Atlas URI connection string. To learn more, see Connect via Driver. You can use the Atlas UI and API to test the failure of the replica set primary in your Atlas cluster and observe how your application handles a replica set failover. You must have Project Cluster Manager or higher role to test failover. Test Failover Process When you submit a request to test failover using the Atlas UI or API, Atlas simulates a failover event. During this process: - Atlas shuts down the current primary. - The members of the replica set hold an election to choose which of the secondaries will become the new primary. Atlas brings the original primary back to the replica set as a secondary. When the old primary rejoins the replica set, it will sync with the new primary to catch up any writes that occurred during its downtime.Note If the original primary accepted write operations that had not been successfully replicated to the secondaries when the primary stepped down, the primary rolls back those write operations when it re-joins the replica set and begins synchronizing. For more information on rollbacks, see Rollbacks During Replica Set Failover. Contact MongoDB support for assistance with resolving rollbacks. If you are testing failover on a sharded cluster, Atlas triggers an election on all the replica sets in the sharded cluster. Test Failover Using the Atlas UI Log in to the Atlas UI and do the following: - Click Database. - For the cluster you wish to perform failover testing, click on the ... button. - Click Test Failover. Atlas displays a Test Failover modal with the steps Atlas will take to simulate a failover event. - Click Restart Primary to begin the test. See Test Failover Process for information on the failover process. Atlas notifies you in the Test Failover modal the results of your failover process. Test Failover Using the API You can use the Test Failover API endpoint to simulate a failover event. To learn more about the failover process, see Test Failover Process. You can verify that the failover was successful by doing the following: - Log in to the Atlas UI and click Database. - Click the name of the cluster for which you performed the failover test. Observe the following changes in the list of nodes in the Overview tab: - The original PRIMARYnode is now a SECONDARYnode. - A former SECONDARYnode is now the PRIMARYnode. Troubleshoot Failover Issues If your application does not handle the failover gracefully, ensure the following: - The connection string includes all members of the replica set. - You are using the latest version of the driver. - You have implemented appropriate retry logic in your application.
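As a minimal sketch of the driver-side picture described above, the Python snippet below connects with retryWrites=true and performs a single write; the cluster hostname and namespace are placeholders. During a test failover, an eligible write like this is retried once automatically, so it either completes on the newly elected primary or surfaces an error that your own retry logic should handle.

from pymongo import MongoClient

# Hypothetical Atlas connection string; retryWrites=true enables the driver's
# built-in single retry of eligible write operations during an election.
uri = (
    "mongodb+srv://user:[email protected]/"
    "?retryWrites=true&w=majority"
)

client = MongoClient(uri)
collection = client["test"]["failover_demo"]

# This either succeeds on the new primary after the election completes,
# or raises an error the application should catch and log.
result = collection.insert_one({"event": "failover-test"})
print(result.inserted_id)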
https://docs.atlas.mongodb.com/tutorial/test-failover/
2022-01-16T19:08:57
CC-MAIN-2022-05
1642320300010.26
[array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object)]
docs.atlas.mongodb.com
Migrating Payment Management from FOB to App Continia provides a free migration tool to ensure an easy and safe transfer of your Payment Management data from the FOB versions of Business Central to the app versions (Microsoft Dynamics 365 Business Central version 15 and newer). Note We strongly recommend that the app is installed within a sandbox before installing it in the host machine to ensure that the installation runs as expected. Prerequisites We recommend that you get familiar with the process of upgrading to Business Central before you start migrating data from one system to another. On Microsoft Docs you can find information about upgrading to Microsoft Dynamics 365 Business Central and the upgrade considerations. Complete the following actions before using the Payment Management migration tool: Post open payment entries. Make sure everything is posted in the cash receipt journal, payment journal, and bank account reconciliation. Export data from Microsoft Dynamics NAV. Move all standard data before moving the Payment Management data. Microsoft Dynamics 365 has developed the Cloud Migration Tool for this purpose, which is recommended to use for the data export. The Continia migration tool can only be used in the process of transferring Payment Management data. Upgrade to Microsoft Dynamics 365 Business Central. Set up a Business Central sandbox environment. The upgrade process depends on your current Microsoft Dynamics NAV version. Microsoft Dynamics 365 has made an overview to on-premise and online that guides you through the upgrade process. Install and activate Payment Management. Install Payment Management in the sandbox environment before moving the Payment Management data. The assisted Payment Management setup guides must be completed. Note When activating Payment Management on-premises you will be requested to fill in the customer's client credentials. If you haven't previously received the requested client credentials, please contact the Continia administration, by sending an email to [email protected] containing the customer's Voice ID and your (Microsoft Partner's) email address, or call +45 8230 5000. Download the Continia Payment Management migration tool. Before migrating the Payment Management data, the migration tool must be downloaded from Continia PartnerZone. On the PartnerZone webpage, navigate to the Download Center, then filter by Payment Management 365 and use the tag Misc to find the migration tool. When the migration tool has been downloaded you can start the migration process. To export Payment Management data from NAV - Open Microsoft Dynamics NAV Development. - Navigate to Tools > Object Designer. - In the Object Designer, choose File > Import. - In the Migration package downloaded from PartnerZone, select the fob file from the folder matching the current NAV/BC version. - Run the page 51666. - On the page Perform Data Migration PM365 fill in the following information: - Scope: Select Setup. It is important that you do not select ALL or History. - Company: Choose the given legal entity. If you do not chose a legal entity, all legal entities will be included in the file. - Choose folder for export: Select to which folder the file should be exported. - In the action bar, select Create migration data. - In the action bar, select Export Migration data. To import the migration file to Business Central - Open your Business Central sandbox environment. - Install Payment Management. 
For more information about installing the online version of Payment Management see the article Installing the Payment Management app. - Upload the migration app: - Cloud: Before migration, be aware that it might be necessary to renumber the objects in the migration app before it can be installed in Business Central. This is necessary if the object numbers are already used by another 3rd party product. The ID of the objects must be renumbered so that they are within the range 50000..99999. We recommend that you do this in Microsoft Visual Studio. Once the app is generated, it loads in Business Central. - On-premises: Find the app located in Continia's Migration Package - When the installation is complete use the icon and search for Payment Management 365 Migration, then select the related link. - On the migration page, specify which data you want to import into Business Central. You can import data several times to make the import more manageable. - In the action bar select Import Migration File. Data selected for import will now be imported from the file into a new page from where the data can be verified before it is finally imported into Business Central. - Review and verify the imported data. - Once the data has been reviewed and verified navigate to the action bar and select Approve Import Suggestion. Payment Management data will now be imported into the Payment Management solution in Business Central. We recommend that you verify all data and settings before completing the migration in your production environment.
https://docs.continia.com/en-us/continia-payment-management/development-and-administration/migrating-from-fob-to-business-central
2022-01-16T19:25:43
CC-MAIN-2022-05
1642320300010.26
[]
docs.continia.com
OpenShift Container Platform Architecture: Additional Concepts → Storage
OpenShift Container Platform Installation and Configuration: Configuring Persistent Storage → Volume Security
For shared storage providers like NFS, Ceph, and Gluster, see OpenShift Container Platform Installation and Configuration.
https://docs.openshift.com/container-platform/3.9/security/storage.html
2022-01-16T18:53:22
CC-MAIN-2022-05
1642320300010.26
[]
docs.openshift.com
Introduction to arkdb Carl Boettiger 2022-01-14Source: vignettes/arkdb.Rmd arkdb.Rmd arkdb Package rationale Increasing (see DBI), and move tables out of such databases into text files. The key feature of arkdb is that files are moved between databases and text files in chunks of a fixed size, allowing the package functions to work with tables that would be much to large to read into memory all at once. This will be slower than reading the file into memory at one go, but can be scaled to larger data and larger data with no additional memory requirement. Tutorial library(arkdb) # additional libraries just for this demo library(dbplyr) library(dplyr) library(nycflights13) library(fs) Creating an archive of an existing database First, we’ll need an example database to work with. Conveniently, there is a nice example using the NYC flights data built into the dbplyr package. tmp <- tempdir() # Or can be your working directory, "." db <- dbplyr::nycflights13_sqlite(tmp) #> Caching nycflights db at /tmp/RtmpamWWdc5273819 secs) #> Exporting airports in 50000 line chunks: #> ...Done! (in 0.01683855 secs) #> Exporting flights in 50000 line chunks: #> ...Done! (in 8.348673 secs) #> Exporting planes in 50000 line chunks: #> ...Done! (in 0.02331734 secs) #> Exporting weather in 50000 line chunks: #> ...Done! (in 0.5932264 × 2 #> path size #> <chr> Unarchive <- DBI::dbConnect(RSQLite::SQLite(), fs::path(tmp, "local.sqlite")) As with ark, we can set the chunk size to control the memory footprint required: unark(files, new_db, lines = 50000) #> Importing /tmp/RtmpamWWdc/nycflights/airlines.tsv.bz2 in 50000 line chunks: #> ...Done! (in 0.01112223 secs) #> Importing /tmp/RtmpamWWdc/nycflights/airports.tsv.bz2 in 50000 line chunks: #> ...Done! (in 0.01941586 secs) #> Importing /tmp/RtmpamWWdc/nycflights/flights.tsv.bz2 in 50000 line chunks: #> ...Done! (in 5.942757 secs) #> Importing /tmp/RtmpamWWdc/nycflights/planes.tsv.bz2 in 50000 line chunks: #> ...Done! (in 0.02831483 secs) #> Importing /tmp/RtmpamWWdc/nycflights/weather.tsv.bz2 in 50000 line chunks: #> ...Done! (in 0.2223248 secs) unark returns a dplyr database connection that we can use in the usual way: tbl(new_db, "flights") #> # Source: table<flights> [?? x 19] #> # Database: sqlite 3.37.0 [/tmp/RtmpamWWdc/local.sqlite] #> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time #> <int> <int> <int> <int> <int> <int> more rows, and 11 more variables: arr_delay <int>, carrier <chr>, #> # flight <int>, tailnum <chr>, origin <chr>, dest <chr>, air_time <int>, #> # distance <int>, hour <int>, minute <int>, time_hour <dbl> # Remove example files we created. DBI::dbDisconnect(new_db) unlink(dir, TRUE) unlink(fs::path(tmp, "local.sqlite")) Pluggable text formats2511501 secs) #> Exporting airports in 50000 line chunks: #> ...Done! (in 0.01621008 secs) #> Exporting flights in 50000 line chunks: #> ...Done! (in 8.703419 secs) #> Exporting planes in 50000 line chunks: #> ...Done! (in 0.0253036 secs) #> Exporting weather in 50000 line chunks: #> ...Done! (in 0.59834 secs) files <- fs::dir_ls(dir, glob = "*.csv.bz2") new_db <- DBI::dbConnect(RSQLite::SQLite(), fs::path(tmp, "local.sqlite")) unark(files, new_db, streamable_table = streamable_base_csv()) #> Importing /tmp/RtmpamWWdc/nycflights/airlines.csv.bz2 in 50000 line chunks: #> ...Done! (in 0.009771824 secs) #> Importing /tmp/RtmpamWWdc/nycflights/airports.csv.bz2 in 50000 line chunks: #> ...Done! 
(in 0.01860404 secs) #> Importing /tmp/RtmpamWWdc/nycflights/flights.csv.bz2 in 50000 line chunks: #> ...Done! (in 5.866602 secs) #> Importing /tmp/RtmpamWWdc/nycflights/planes.csv.bz2 in 50000 line chunks: #> ...Done! (in 0.02670312 secs) #> Importing /tmp/RtmpamWWdc/nycflights/weather.csv.bz2 in 50000 line chunks: #> ...Done! (in 0.1778193: #> Warning: The `path` argument of `write_tsv()` is deprecated as of readr 1.4.0. #> Please use the `file` argument instead. #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated. #> ...Done! (in 0.1316538 secs) #> Exporting airports in 50000 line chunks: #> ...Done! (in 0.02157211 secs) #> Exporting flights in 50000 line chunks: #> ...Done! (in 4.207392 secs) #> Exporting planes in 50000 line chunks: #> ...Done! (in 0.02683759 secs) #> Exporting weather in 50000 line chunks: #> ...Done! (in 0.2608745. A note on compression). Distributing data.
https://docs.ropensci.org/arkdb/articles/arkdb.html
2022-01-16T19:03:31
CC-MAIN-2022-05
1642320300010.26
[]
docs.ropensci.org
Create One Private Endpoint for One Provider
Groups and projects are synonymous terms. Your {GROUP-ID} is the same as your project ID. For existing groups, your group/project ID remains the same. The resource and corresponding endpoints use the term groups.
Create one of the following:
- One private endpoint for AWS or Azure in an Atlas project.
- One private endpoint group for Google Cloud in an Atlas project. Endpoint groups represent a collection of endpoints.
To learn more, see Set up a Private Endpoint.
If the attempt to add an endpoint or endpoint group fails, delete it, then try to add a new one.
Prerequisites
You must complete the following steps for your cloud provider before you can create a private endpoint or endpoint group:
Required Roles
You must have at least the Project Owner role for the project to successfully call this resource.
Request
Request Path Parameters
Request Query Parameters
This endpoint might use any of the HTTP request query parameters available to all Atlas Administration API resources. All of these are optional.
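Because the exact resource URL and body fields are not reproduced on this page, the sketch below only illustrates the general calling pattern for the Atlas Administration API - HTTP digest authentication with an API key pair - using Python's requests library. The endpoint path and request body are explicitly hypothetical placeholders; consult the Request Path Parameters and Request Body Parameters sections for the real values.

import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "your-public-key"          # placeholder API public key
PRIVATE_KEY = "your-private-key"        # placeholder API private key
GROUP_ID = "5f3c7d8e9a1b2c3d4e5f6a7b"   # placeholder project (group) ID

# Hypothetical stand-in for the "create one private endpoint" resource path.
endpoint_url = f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/..."

payload = {"id": "vpce-0123456789abcdef0"}  # hypothetical body; see Request Body Parameters

response = requests.post(
    endpoint_url,
    json=payload,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
)
response.raise_for_status()
print(response.json())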
https://docs.atlas.mongodb.com/reference/api/private-endpoints-endpoint-create-one/
2022-01-16T19:41:27
CC-MAIN-2022-05
1642320300010.26
[array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object) array(['/assets/link.png', 'icons/link.png'], dtype=object)]
docs.atlas.mongodb.com
Managing custom domain names for an App Runner service When you create an AWS App Runner service, App Runner allocates a domain name for it. This is a subdomain in the awsapprunner.com domain that's owned by App Runner. It can be used to access the web application that's running in your service. If you own a domain name, you can associate it to your App Runner service. After App Runner validates your new domain, it can be used to access your application in addition to the App Runner domain. You can associate up to five custom domains. You can optionally include the www subdomain of your domain. However, this is currently only supported in the API. The App Runner console doesn't support it. When you associate a custom domain with your service, App Runner provides you with a set of CNAME records to add to your Domain Name System (DNS). Add certificate validation records to your DNS so that App Runner can validate that you own or control the domain. In addition, add DNS target records to your DNS to target the App Runner domain. You need to add one record for the custom domain, and another for the www subdomain, if you chose this option. Then wait for the custom domain status to become Active in the App Runner console. This typically takes several minutes (but might take 24-48 hours). At this point, your custom domain is validated, and App Runner starts routing traffic from this domain to your web application. You can specify a domain to associate with your App Runner service in the following ways: A root domain – For example, example.com. You can optionally associate as part of the same operation. A subdomain – For example, login.example.comor admin.login.example.com. You can optionally associate the wwwsubdomain too as part of the same operation. A wildcard – For example, *.example.com. You can't use the wwwoption in this case. You can specify a wildcard only as the immediate subdomain of a root domain, and only on its own (these aren't valid specifications: login*.example.com, *.login.example.com). This wildcard specification associates all immediate subdomains, and doesn't associate the root domain itself (the root domain would have to be associated in a separate operation). A more specific domain association overrides a less specific one. For example, login.example.com overrides *.example.com. The certificate and CNAME of the more specific association are used. The following example shows how you can use multiple custom domain associations: Associate example.comwith the home page of your service. Enable the wwwto also associate. Associate login.example.comwith the login page of your service. Associate *.example.comwith a custom "not found" page. You can disassociate (unlink) a custom domain from your App Runner service. When you unlink a domain, App Runner stops routing traffic from this domain to your web application. You must delete the records for this domain from your DNS. App Runner internally creates certificates that track domain validity. They're stored in AWS Certificate Manager (ACM). App Runner doesn't delete these certificates for seven days after a domain is disassociated from your service or after the service is deleted. Manage custom domains Manage custom domains for your App Runner service using one of the following methods: - App Runner console To associate (link) a custom domain using the App Runner console Open the App Runner console , and in the Regions list, select your AWS Region. In the navigation pane, choose Services, and then choose your App Runner service. 
The console displays the service dashboard with a Service overview.
- On the service dashboard page, choose the Custom domains tab. The console shows the custom domains that are associated with your service, or No custom domains.
- On the Custom domains tab, choose Link domain.
- In the Link custom domain dialog, enter a domain name, and then choose Link custom domain.
- Follow the instructions on the Configure DNS page to start the domain validation process.
- Choose Close. The console shows the dashboard again. The Custom domains tab has a new tile showing the domain you just linked in the Pending certificate DNS validation status.
- When the domain status changes to Active, verify that the domain works for routing traffic by browsing to it.
To disassociate (unlink) a custom domain using the App Runner console
- On the Custom domains tab, select the tile for the domain you want to disassociate, and then choose Unlink domain.
- In the Unlink domain dialog, verify the action by choosing Unlink domain.
- App Runner API or AWS CLI
To associate a custom domain with your service using the App Runner API or AWS CLI, call the AssociateCustomDomain API action. When the call succeeds, it returns a CustomDomain object that describes the custom domain that's being associated with your service. The object should show a status of CREATING, and contains a list of CertificateValidationRecord objects. These are records you can add to your DNS.
To disassociate a custom domain from your service using the App Runner API or AWS CLI, call the DisassociateCustomDomain API action. When the call succeeds, it returns a CustomDomain object that describes the custom domain that's being disassociated from your service. The object should show a status of DELETING.
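For a programmatic sketch of the API route described above, the snippet below uses boto3 to call AssociateCustomDomain and print the certificate validation records that need to be added to your DNS. The service ARN and domain name are placeholders, and the exact response field names should be checked against the App Runner API reference; this is a sketch, not a definitive implementation.

import boto3

apprunner = boto3.client("apprunner")

# Placeholder identifiers - substitute your own service ARN and domain name.
service_arn = "arn:aws:apprunner:us-east-1:123456789012:service/my-service/abc123"
domain_name = "example.com"

resp = apprunner.associate_custom_domain(
    ServiceArn=service_arn,
    DomainName=domain_name,
    EnableWWWSubdomain=True,  # also associate www.example.com
)

# Print the CNAME validation records to add to your DNS zone.
for record in resp["CustomDomain"].get("CertificateValidationRecords", []):
    print(record.get("Name"), "->", record.get("Value"))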
https://docs.aws.amazon.com/apprunner/latest/dg/manage-custom-domains.html
2022-01-16T20:31:28
CC-MAIN-2022-05
1642320300010.26
[]
docs.aws.amazon.com
public interface GenericConverter
Generic converter interface for converting between two or more types. Implementations have access to source and target field context that can be used to influence the conversion logic. This interface should generally not be used when the simpler Converter or ConverterFactory interfaces are sufficient.
See also: TypeDescriptor, Converter, ConverterFactory
Set<GenericConverter.ConvertiblePair> getConvertibleTypes()
Return the source and target types this converter can convert between. Each entry is a convertible source-to-target type pair.
Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType)
Parameters:
source - the source object to convert (may be null)
sourceType - the type descriptor of the field we are converting from
targetType - the type descriptor of the field we are converting to
https://docs.spring.io/spring-framework/docs/3.0.x/javadoc-api/org/springframework/core/convert/converter/GenericConverter.html?is-external=true
2022-01-16T20:12:23
CC-MAIN-2022-05
1642320300010.26
[]
docs.spring.io
nisd.largegroup.suffix This configuration parameter specifies the suffix string or character to use in group names when automatically splitting up a group with large number of members. Because group.bygid and group.byname NIS maps can often contain membership lists that exceed the 1024 limit for how much NIS data can be served to clients, the adnisd process will automatically truncate the membership list when this limit is reached. To allow the additional membership data to be retrieved, you can configure the Centrify Network Information Service to automatically split a large group into as many new groups as needed to deliver the complete membership list. If you specify any value for the nisd.largegroup.suffix parameter, you enable the adnisd process to automatically split a large group into multiple new groups. When a group’s data size exceeds 1024 data limit, a new group is created. The new group name is formed using the original group name, followed by the string defined for the nisd.largegroup.suffix parameter and ending in a number that represents the numeric order of the new group created. For example, if you have a large group named performix-worldwide-corp, and have defined the suffix string as “-all” and the maximum length for group names as 10, when the performix-worldwide-corp group membership is split into multiple groups, the groups are named as follows: performix-worldwide-corp-all1 performix-worldwide-corp-all2 performix-worldwide-corp-all3 performix-worldwide-corp-all4 All of the new groups have the same group identifier (GID) as the original group. If the new group names would exceed the maximum length for group names on a platform, you can use the nisd.largegroup.name.length parameter to set the maximum length for the new groups created. If this configuration parameter is not set, the adnisd process truncates the group membership list such that each group entry is under 1024 characters.
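The splitting behaviour described above is easy to picture: each overflow group keeps the original GID and takes the configured suffix plus an increasing index. The Python sketch below is only an illustration of that naming pattern - it is not Centrify code, and its size check is a simplification of how adnisd measures the 1024-character map entry limit.

def split_group(name, members, suffix="-all", limit=1024):
    """Illustrative split of an NIS group membership list into suffixed groups."""
    groups, current, index = [], [], 1
    for member in members:
        candidate = current + [member]
        # Simplified size check: length of the comma-separated member list.
        if current and len(",".join(candidate)) > limit:
            groups.append((f"{name}{suffix}{index}", current))
            index += 1
            current = [member]
        else:
            current = candidate
    if current:
        groups.append((f"{name}{suffix}{index}", current))
    return groups

# Example: a large group split into performix-worldwide-corp-all1, -all2, ...
members = [f"user{i:04d}" for i in range(400)]
for group_name, group_members in split_group("performix-worldwide-corp", members):
    print(group_name, len(group_members))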
https://docs.centrify.com/Content/config-unix/nisd_largegroup_suffix.htm
2022-01-16T18:17:46
CC-MAIN-2022-05
1642320300010.26
[]
docs.centrify.com
Documents
MongoDB stores data records as BSON documents. BSON is a binary representation of JSON documents, though it contains more data types than JSON. For the BSON spec, see bsonspec.org. See also BSON Types.
Document Structure
Field names are strings. Documents have the following restrictions on field names:
- The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.
- Field names cannot contain the null character.
- The server permits storage of field names that contain dots (.) and dollar signs ($).
- MongoDB 5.0 adds improved support for the use of ($) and (.) in field names. There are some restrictions. See Field Name Considerations for more details.
- MongoDB 2.6 through MongoDB versions with featureCompatibilityVersion (fCV) set to "4.0" or earlier
- For indexed collections, the values for the indexed fields have a Maximum Index Key Length. See Maximum Index Key Length for details.
Dot Notation
MongoDB uses the dot notation to access the elements of an array and to access the fields of an embedded document. See also:
- $[] all positional operator for update operations
- $[<identifier>] filtered positional operator for update operations
- $ positional operator for update operations
- $ projection operator when array index position is unknown
- Query an Array for dot notation examples with arrays
Embedded Documents
Document Limitations
Documents have the following attributes:
Document Size Limit
Unlike JavaScript objects, the fields in a BSON document are ordered.
Field Order in Queries
For queries, the field order behavior is as follows: When comparing documents, field ordering is significant. For example, when comparing documents with fields a and b in a query:
- {a: 1, b: 1} is equal to {a: 1, b: 1}
- {a: 1, b: 1} is not equal to {b: 1, a: 1}
For efficient query execution, the query engine may reorder fields during query processing. Among other cases, reordering fields may occur when processing these projection operators: $project, $addFields, $set, and $unset.
- Field reordering may occur in intermediate results as well as the final results returned by a query.
- Because some operations may reorder fields, you should not rely on specific field ordering in the results returned by a query that uses the projection operators listed earlier.
Field Order in Write Operations
For write operations, MongoDB preserves the order of the document fields except for the following cases:
The _id Field
If the _id contains subfields, the subfield names cannot begin with a ($) symbol. The _id field may contain values of any BSON data type, other than an array, regex, or undefined.
Warning: To ensure functioning replication, do not store values that are of the BSON regular expression type in the _id field.
The following are common options for storing values for _id:
- Use an ObjectId.
In addition to defining data records, MongoDB uses the document structure throughout, including but not limited to: query filters, update specification documents, and index specification documents.
Query Filter Documents
Query filter documents specify the conditions that determine which records to select for read, update, and delete operations. You can use <field>:<value> expressions to specify the equality condition and query operator expressions.
For examples, see: Update Specification Documents Update specification documents use update operators to specify the data modifications to perform on specific fields during an update operation. For examples, see Update specifications. Index Specification Documents Index specification documents define the field to index and the index type: Further Reading For more information on the MongoDB document model, download the MongoDB Application Modernization Guide. The download includes the following resources:
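Tying the document types above together - query filters, update specifications, and index specifications - the hedged pymongo sketch below shows one of each, including a dot-notation filter on an embedded field. The deployment address and collection names are hypothetical examples, not values from this page.

from pymongo import MongoClient, ASCENDING

# Hypothetical local deployment and collection, used only for illustration.
client = MongoClient("mongodb://localhost:27017")
inventory = client["test"]["inventory"]

inventory.insert_one({"item": "journal", "size": {"uom": "in", "h": 14}, "tags": ["red"]})

# Query filter document using dot notation to reach an embedded field.
doc = inventory.find_one({"size.uom": "in"})

# Update specification document using the $set update operator.
inventory.update_one({"item": "journal"}, {"$set": {"size.uom": "cm"}})

# Index specification: the field to index plus the index type (ascending here).
inventory.create_index([("size.uom", ASCENDING)])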
https://docs.mongodb.com/v5.0/core/document/
2022-01-16T18:15:11
CC-MAIN-2022-05
1642320300010.26
[array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object) array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)]
docs.mongodb.com
Updating Ombi¶ Automatic Updates¶ Note: The built-in automatic updater is broken for 'local' installations. The developer is aware of this, as is the support team. Current development is focused on a UI rewrite - once a viable cross-platform update method has been found, it will be implemented as a fix. Automated container updates via something like WatchTower for docker installs are unaffected - only direct installs using apt/exe deployment. If you have a suggestion for an update solution, feel free to either fork the project and submit a pull request, or submit a suggestion over on Discord. Watchtower (Docker)¶ There is an option in docker to use something called 'Watchtower' to automatically update containers/images. If going this route we strongly suggest using a few extra arguments for both the Ombi container and the watchtower one. For the Ombi container, add a label to the container named com.centurylinklabs.watchtower.enable. Set it to true. For the Watchtower one, add a label to the container named WATCHTOWER_LABEL_ENABLE. Set it to true. Use Script (semi-automatic updates)¶ You can use your own update script here, please note that this will have to manage the termination and start of the Ombi process. You will have to terminate Ombi yourself. - carnivorouz - v4 Linux systemd script Script Path¶ The path to your script, we will automatically pass it the following arguments in the following order: YourScript {UpdateDownloadUrl} --applicationPath {CurrentInstallLocation} --processname {ProcessName} --host {Ombi Host Argument} --storage {Ombi StoragePath Argument} e.g. Update.sh --applicationPath /opt/ombi --processname ombi --host http://*:5000 This means the variables will be: {UpdateDownloadUrl}: $1 {CurrentInstallLocation}: $3 {ProcessName}: $5 {Ombi Host Argument}: $7 {Ombi StoragePath Argument}: $9 The {UpdateDownloadUrl} is the Download that will contain either the .zip or .tar.gz file. {Ombi Host Argument} and {Ombi StoragePath Argument} are the args that you may have passed into Ombi e.g. Ombi Host Argument could be http://*:5000 (They are optional) Manual Updates¶ It is possible to update Ombi manually. To do so is fairly straightforward. - Stop Ombi. You can't do anything to it if the program is running. - If you're running Ombi as a service, stop the service. - If you're running Ombi manually, kill the process. - Back up the database info from the Ombi directory. - Delete the contents of the Ombi directory, excluding the files mentioned in step 2. - Download the latest windows.zipfrom the link below: Stable - Extract the zip to your Ombi directory. - Start Ombi again. External Script (windows)¶ Windows users who are running Ombi as a service can make use of a powershell script to update their Ombi instance. This script can be scheduled in task scheduler to run daily (or hourly), and it will check the current version of your Ombi instance against the latest release. Do not put it into the same folder as Ombi itself, as the script cleans out that folder and will have issues if it deletes itself. This only works for develop releases, and is very beta. Do not use unless you know what you are doing with powershell. You can download the script from here. If you would prefer a pre-compiled executable file that can be scheduled in task scheduler, you can download that here. You will need to pass parameters to the script when calling it for it to work, and it will need to be run as an administrator. 
Parameters are: - ApiKey This should be your API key for Ombi (found in your web interface). This is required. - Ombidir This is the folder your copy of Ombi is running from. This is required. - OmbiURL The address Ombi is listening on. This is required if you are using a non-standard port, IP, or baseurl. Defaults to - UpdaterPath This is where the script will download to. It's only required if you don't want the downloaded files put in your downloads folder, as it defaults to a folder in your downloads folder. - ServiceName Most of us just use 'Ombi', so it's the default. If you used something different, pass in this parameter with whatever you used. - Filename This is only needed if you are using x86. If this is the case, pass in Win10-x86.zip as the parameter. Default is Win10-x64.zip. - Force This is a simple true/false switch - it will force the script to install the newest version, even if it's already installed. If the parameter isn't there, it's false. The moment you add -Force to the end of the command you'd normally use to run this script, it'll be true and force a reinstall. To pass parameters to a powershell script, you name them when calling the script as such: script -parametername 'parametervalue' -parameter2name 'parameter2value'
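For illustration only, here is a minimal sketch (written in Python rather than the shell script shown in the docs above) of how a custom update script from the "Use Script" option could read the arguments Ombi passes in; the positions match the $1/$3/$5/$7/$9 mapping described earlier, and the actual download, unpack, and process restart steps are deliberately left out.

    # Hypothetical custom update hook; Ombi passes:
    #   argv[1] = update download URL, argv[3] = install path,
    #   argv[5] = process name, argv[7] = host arg, argv[9] = storage arg
    import sys

    download_url = sys.argv[1]
    install_path = sys.argv[3]
    process_name = sys.argv[5]
    host_arg = sys.argv[7] if len(sys.argv) > 7 else ""
    storage_arg = sys.argv[9] if len(sys.argv) > 9 else ""

    print(f"Would download {download_url} and unpack into {install_path}")
    print(f"Would restart process {process_name} (host={host_arg}, storage={storage_arg})")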
https://docs.ombi.app/guides/updating/
2022-01-16T20:00:37
CC-MAIN-2022-05
1642320300010.26
[]
docs.ombi.app
Parsing Personal Names If you have name data that is all in one field, you may want to parse the name into separate fields for each part of the name, such as first name, last name, title of respect, and so on. These parsed name elements can then be used by other automated operations such as name matching, name standardization, or multi-record name consolidation. - sink stage onto the canvas and connect Open Name Parser to it. For example, if you are using a Write to File sink, your dataflow might look like this: - Double-click the sink stage and configure it. See the Dataflow Designer's Guide for instructions on configuring source stages. You have created a dataflow that can parse personal names into component parts, placing each part of the name in its own field.
https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/DataQualityGuide/source/ParsingPersonalNames.html
2022-01-16T18:37:01
CC-MAIN-2022-05
1642320300010.26
[]
docs.precisely.com
License Comparison In addition to feature differences among tiers, some features are also varied in licenses. - EE Trial: Enterprise Edition with Default License. - EE Licensed: Enterprise Edition with Authorized License. - Deploy Trial: Deploy Edition with Default License. - Deploy Licensed: Deploy Edition with Authorized License. Here we list the features comparison among licenses. FAQFAQ Q: What is the license detail? - For each license, we determine the: - # of nodes allowed - # of groups allowed - # of deployments allowed - License Start Time - License Expiration Time Q: What is the default license? - When we install a PrimeHub EE or PrimeHub Deploy, there is a default license installed. - The detail of default license: - 1 node - 1 group - 1 deployment - The license is never expired Q: What happens if the license is expired? - The normal user cannot create new resources for Jobs, Schedules, and Deployments. - The existing resources would not be affected. Q: What happens if the # of node exceeds the limitation in license? - The console would show a warning message: You are using more nodes than your license allows. Please contact your system administrator. Q: What happens if I upgrade from CE to EE, and the # of groups exceeds the limitation in license? - The existing groups would not be affected, but it is not allowed to create new group.
https://docs.primehub.io/docs/license-comparison
2022-01-16T19:56:55
CC-MAIN-2022-05
1642320300010.26
[]
docs.primehub.io
Functions: Source: The following set of arrays contain results, in order, of a series of races. From this list, the goal is to generate the score for each racer according. NOTE: If ARRAYINDEXOF and ARRAYRIGHTINDEXOF do not return the same value for the same inputs, then the value is not unique in the array. You can evaluate for the Did Not Finish (DNF) and last place conditions: You can use the following to determine if the specified racer was last in the event: Results:
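The NOTE above about ARRAYINDEXOF and ARRAYRIGHTINDEXOF can be illustrated outside of the product; the following Python sketch (an analogy, not Trifacta syntax) shows the same idea that a value is unique in an array exactly when its first and last index agree.

    def is_unique(values, target):
        first = values.index(target)                         # like ARRAYINDEXOF
        last = len(values) - 1 - values[::-1].index(target)  # like ARRAYRIGHTINDEXOF
        return first == last                                 # equal indexes => value appears once

    race = ["anna", "raj", "anna", "mei"]
    print(is_unique(race, "raj"))   # True  - appears once
    print(is_unique(race, "anna"))  # False - appears more than once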
https://docs.trifacta.com/exportword?pageId=136160948
2022-01-16T18:35:54
CC-MAIN-2022-05
1642320300010.26
[]
docs.trifacta.com
... Account Roles A member can have one of the following roles: Member role The Member role enables the user to access all product functionality that is enabled for the product edition. Admin role The Admin role enables all capabilities of the Member role, plus: ... Resource Roles and Privileges Access to the product is governed by roles in the user account. - A role is a set of zero or more privileges. - A privilege is an access level for a type of object. - A user account may have one or more roles assigned to it. - A role may have one or more privileges assigned to it. - All accounts are created with the default role, which provides a set of basic privileges. Standard roles Default role All new users are automatically assigned the default role. By default, the default role enables full access to all types of objects. - If you have upgraded from a version of the product that did not support authorization, the default role represents no change in behavior. All existing users can access the product as normal. Since roles in a user account are additive, you may choose to reduce the privileges on the default role and then add privileges selectively by creating other roles and assigning them to users. See the example below. ... Workspace admin role ... - access to all objects, unless specifically limited. See Resource Roles and Privileges below. - administration functions and settings within the workspace. Custom role(s) As needed, administrators can create custom roles for users of the project or workspace. For more information, see Create Role. Privileges For a complete list of privileges for each type of object, see Privileges and Roles Reference. ... In the following model, three separate roles have been created. Each role enables the highest level of access to a specific type of object. The default role has been modified: - Since all users are automatically granted the default role, the scope of its permissions has been reduced here to view-only. - There is no viewer privilege for Plans (none, author). ... - User can create, schedule, modify, run jobs, and delete flows (full privileges). - User can create, modify, and delete connections (full privileges). - User can create, schedule, modify, run jobs, and delete plans (full privileges).
https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=160408355&selectedPageVersions=6&selectedPageVersions=5
2022-01-16T19:51:57
CC-MAIN-2022-05
1642320300010.26
[]
docs.trifacta.com
GitLab 1. Log in to GitLab 2. Settings Click on Settings from the dropdown menu. 3. Account Under User Settings, click on Account. 4. Two-Factor Authentication To the right, the Two-Factor Authentication status is set as Disabled. Click on Enable two-factor authentication. Register with a two-factor app. You will also be prompted to save backup codes for account access should you not have access to the app. Make sure to store them somewhere safe. The status should now show that Two-factor authentication is Enabled. The next time you log in to GitLab and are prompted for a One-time passcode, you can use the Trusona app to log in.
https://docs.trusona.com/totp/gitlab/
2022-01-16T18:15:46
CC-MAIN-2022-05
1642320300010.26
[array(['https://docs.trusona.com/images/totp-integration-images/gitlab/step-2.png', 'Click on Settings'], dtype=object) array(['https://docs.trusona.com/images/totp-integration-images/gitlab/step-3.png', 'Click on Account'], dtype=object) array(['https://docs.trusona.com/images/totp-integration-images/gitlab/step-4.png', 'Enable Two-factor authentication'], dtype=object) array(['https://docs.trusona.com/images/totp-integration-images/gitlab/step-5.png', 'Scanning the code'], dtype=object) array(['https://docs.trusona.com/images/totp-integration-images/gitlab/step-6.png', 'Finalize'], dtype=object) ]
docs.trusona.com
Risk Assessment Process Intro A risk assessment should be completed at least annually and is core to the information security process. The risk assessment framework is a tool to help identify the material weaknesses and prioritize areas for improvement. A risk assessment is also a great tool for demonstrating your awareness of the scenarios that could result in loss of confidentiality, integrity, and availability of sensitive data. If these scenarios are well documented, assessed, and mitigated, it becomes easier to build trust with stakeholders. Risk can live in all areas of the business, so the risk assessment should consider many topics, including people, processes, and technology. Scope A strong risk assessment will have a clear focus. Consider the scope of the ISMS and make sure the risk assessment is focused on the people, processes, and technologies that are covered by the ISMS. Participants At a minimum, the security team should be involved in the annual risk assessment. Other teams that often have ideas in terms of risk identification, assessment, and mitigation are: - Engineering - Site Reliability - Customer Support - People Operations - Legal - Operations Pre-Work Participating individuals should be sent a communication 1-2 weeks prior to the risk assessment, encouraging them to consider the top information security risks that the company faces at this point in time. As pre-work, consider asking participants to individually answer 50-60 questions in one of the following calibration tests. (Research suggests that calibrating your ability to forecast prior to forecasting can help level results across participants.) Risk Identification A strong risk assessment will identify risks from across the business that could impact the confidentiality, integrity, and availability of sensitive data. Aptible has a starting template of risks that can be made available upon request. The more specific a risk statement, the more targeted the mitigation and response can become. You can consider facilitating a session and asking the questions below to help document additional risks: - What security risks could impact our ability to achieve our objectives as a company? - What events would lead to loss of confidentiality of critical data or services? - What events would lead to loss of integrity of critical data or services? - What events would lead to loss of availability of critical data or services? - What tools, technologies, and vendors do we rely on that may introduce risk? - What changes in the external environment may impact the organization? - What keeps you up at night as it relates to an internal process or procedure? Risk Assessment When it comes time to assess each risk, this article will help with the mechanical steps in the Comply application. The assessment of risks allows for objectively prioritizing which risks need action taken.
Risk Response For each risk, a response should be chosen: - Accept the risk by not acting; - Mitigate the risk by implementing a security control to lessen the impact of the threat event or the likelihood of the threat event occurring; - Avoid the risk by acting in a way that prevents the threat from occurring (e.g., for the threat event that a phishing attack is successful because 2FA is disabled, we might avoid the risk by requiring use of 2FA); - Transfer the risk that the threat occurs to another party (e.g., if one threat is related to us creating our own internal messaging system, we might instead purchase the use of a third-party messaging system); or - Share the risk by transferring a portion of the risk to a third party. If a risk will be accepted, it is important to log a rationale for that in the notes field outlining why the risk is being accepted. Risks that are being mitigated should be linked to the corresponding policies and controls in Comply under the Control Responses column. Risk Treatment The risk assessment is a great tool for prioritizing security projects and risk treatments to take on over the course of the next year. If a risk is above an acceptable level, or there are known weaknesses with the controls, use the "Open Ticket" function to track a risk treatment. The risk treatment should have clear ownership and due dates so you can clearly demonstrate commitment to improving your security program over time. Get an overview of the risk tool within Aptible Comply
https://docs.aptible.com/docs/risk-assessment-process
2022-01-16T18:28:33
CC-MAIN-2022-05
1642320300010.26
[array(['https://files.readme.io/7bb64bb-Screen_Shot_2020-06-30_at_2.53.38_PM.png', 'Screen Shot 2020-06-30 at 2.53.38 PM.png'], dtype=object) array(['https://files.readme.io/7bb64bb-Screen_Shot_2020-06-30_at_2.53.38_PM.png', 'Click to close...'], dtype=object) ]
docs.aptible.com
Geolocated Coordinates (mmcoordinates) Description Geolocates an IPv4 address and returns its coordinates. This operation returns data for public IP addresses. If an IP is private, it will return null. Use the Geolocated coordinates with MaxMind GeoIP2 (mm2coordinates) operation if you want to get the coordinates of an IPv6 address (ip6 data type). How does it work in the search window? This operation needs only one argument: The data type of the new column is geocoord. Example We want to get the coordinates corresponding to the IP addresses in our clientIpAddress column, so we click Create column and select the Geolocated coordinates operation. Select clientIpAddress as the argument and assign a name to the new column - let's call it coordinates. You will get the following result: How does it work in LINQ? Use the operator as... and add the operation syntax to create the new column. The syntax is as follows: mmcoordinates(ip) Example Copy the following LINQ script and try the above example on the demo.ecommerce.data table. from demo.ecommerce.data select mmcoordinates(clientIpAddress) as coordinates
https://docs.devo.com/confluence/ndt/v7.0.8/searching-data/building-a-query/operations-reference/geolocation-group/geolocated-coordinates-mmcoordinates
2022-01-16T19:15:51
CC-MAIN-2022-05
1642320300010.26
[]
docs.devo.com
New Relic offers an integration for reporting your AWS IoT metric data and inventory data. With this integration you can: - Create charts and dashboards from automatically captured metric data. - Set alert conditions on your AWS IoT integration data directly from the New Relic Integrations page. Activate integration To enable this integration, follow standard procedures to Connect AWS services to New Relic. Configuration and polling By default, New Relic queries your AWS IoT services every 5 minutes. If you want New Relic to query your services less often, you can change the polling frequency. Explore integration data After connecting the AWS IoT integration to New Relic and waiting a few minutes, you can use integration data: Metric data To view metric data for your AWS IoT integration, create NRQL queries for IOTBrokerSample, IOTRuleActionSample, and IOTRuleSample events and their related attributes. For more information on AWS IoT metrics and dimensions, see the AWS IoT Developer Guide. Inventory data To view inventory data for AWS IoT, go to one.newrelic.com > Infrastructure > Inventory and search for or select the following:
https://docs.newrelic.com/docs/infrastructure/amazon-integrations/aws-integrations-list/aws-iot-monitoring-integration
2022-01-16T18:53:06
CC-MAIN-2022-05
1642320300010.26
[]
docs.newrelic.com
Grammars A valid parsing grammar contains: - A root variable that defines the sequence of tokens, or domain pattern, as rule variables. - Rule variables that define the valid set of characters and the sequence in which those characters can occur in order to be considered a member of a domain pattern. For more information, see Rule Section Commands. - The input field to parse. Input field designates the field to parse in the source data records. - The output fields for the resulting parsed data. Output fields define where to store each resulting token that is parsed. - Characters used to tokenize the input data that you are parsing. Tokenizing characters are characters, like space and hyphen, that determine the start and end of a token. The default tokenization character is a space. Tokenizing characters are the primary way that a sequence of characters is broken down into a set of tokens. You can set the tokenize command to NONE to stop the field from being tokenized. When tokenize is set to None, the grammar rules must include any spaces within their rule definitions. - Casing sensitivity options for tokens in the input data. - Join character for delimiting matching tokens. - Matching tokens in tables - Matching compound tokens in tables - Defining RegEx tags - Literal strings in quotes - Expression Quantifiers (optional). For more information about expression quantifiers, see Rule Section Commands and Expression Quantifiers: Greedy, Reluctant, and Possessive Behavior. - Other miscellaneous indicators for grouping, commenting, and assignment (optional). For more information about grouped expressions, see Grouping Operator ( ). The rule variables in your parsing grammar form a layered tree structure of the sequence of characters or tokens in a domain pattern. For example, you can create a parsing grammar that defines a domain pattern based on name input data that contains the tokens <FirstName>, <MiddleName>, and <LastName>. Using the input data: Joseph Arnold Cowers You can represent that data string as three tokens in a domain pattern: <root> = <FirstName><MiddleName><LastName>; The rule variables for this domain pattern are: <FirstName> = <given>; <MiddleName> = <given>; <LastName> = @Table("Family Names"); <given> = @RegEx("[A-Za-z]+"); Based on this simple grammar example, Open Parser tokenizes on spaces and interprets the token Joseph as a first name because the characters in the first token match the [A-Za-z]+ definition and the token is in the defined sequence. Optionally, any expression may be followed by another expression. Example <variable> = "some leading string" <variable2>; <variable2> = @Table ("given") @RegEx("[0-9]+"); A grammar rule is a grammatical statement wherein a variable is equal to one or more expressions. Each grammar rule follows the form: <rule> = expression [| expression...]; Grammar rules must follow these rules: - <root> is a special variable name and is the first rule executed in the grammar because it defines the domain pattern. <root> may not be referenced by any other rule in the grammar. - A <rule> variable may not refer to itself directly or indirectly. When rule A refers to rule B, which refers to rule C, which refers to rule A, a circular reference is created. Circular references are not permitted. - A <rule> variable is equal to one or more expressions. - Each expression is separated by an OR, which is indicated using the pipe character (|). - Expressions are examined one at a time. The first expression to match is selected. No further expressions are examined.
- The variable name may be composed of alphabetic, numeric, underscore (_) and hyphen (-). The name of the variable may start with any valid character. If the specified output field name does not conform to this form, use the alias feature to map the variable name to the output field. An expression may be any of the following types: - Another variable - A string consisting of one or more characters in single or double quotes. For example: "McDonald" 'McDonald' "O'Hara" 'O\'Hara' 'D"har' "D\"har" - Table - CompoundTable - RegEx commands
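To make the matching behaviour above more concrete, here is a small Python sketch of the same logic; it is not Open Parser itself, and the family-name set is a made-up stand-in for the @Table("Family Names") lookup.

    import re

    FAMILY_NAMES = {"Cowers", "Smith", "Garcia"}   # hypothetical stand-in for @Table("Family Names")
    GIVEN = re.compile(r"[A-Za-z]+$")              # mirrors @RegEx("[A-Za-z]+")

    def parse_name(text):
        tokens = text.split(" ")                   # tokenize on the default space character
        if len(tokens) != 3:
            return None                            # does not match the <root> pattern
        first, middle, last = tokens
        # Each token must satisfy its rule, in sequence.
        if GIVEN.match(first) and GIVEN.match(middle) and last in FAMILY_NAMES:
            return {"FirstName": first, "MiddleName": middle, "LastName": last}
        return None

    print(parse_name("Joseph Arnold Cowers"))
    # {'FirstName': 'Joseph', 'MiddleName': 'Arnold', 'LastName': 'Cowers'}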
https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/DNM/source/OpenParser/WhatIsParsingGrammar.html
2022-01-16T19:15:33
CC-MAIN-2022-05
1642320300010.26
[array(['../../../Images/OpenParserTreeStructure.png', None], dtype=object)]
docs.precisely.com
Generally this capsule is only on the build machine; it's not used by the obfuscated scripts, and should not be distributed to the end users. Important The capsule may help others to hack the obfuscated scripts, so please do not share your private capsule with anyone else. Obfuscated Scripts After the scripts are obfuscated by PyArmor, in the dist folder you find all the required files to run the obfuscated scripts: dist/ myscript.py mymodule.py pytransform/ __init__.py _pytransform.so/.dll/.dylib Before v6.3, there are 2 extra files: pytransform.key and license.lic. Super Obfuscated Scripts If the scripts are obfuscated by Super Mode, it's totally different. There is only one runtime file, the extension module pytransform. Only these files are in dist: myscript.py mymodule.py pytransform.so or pytransform.dll All the obfuscated scripts would be like this: from pytransform import pyarmor pyarmor(__name__, __file__, b'\x0a\x02...', 1) Or there is a suffix in the extension name, for example: from pytransform_vax_000001 import pyarmor Since v5.8.7, the runtime package may have a suffix. For example: from pytransform_vax_000001 import pyarmor_runtime pyarmor_runtime(suffix='_vax_000001') For Super Mode, not only the entry script but also the other obfuscated scripts include one line of Bootstrap Code: from pytransform import pyarmor There are 2 files in this package: pytransform/ __init__.py - a normal python module; _pytransform.so/.dll/.lib - a dynamic library that implements the core functions. Before v6.3.0, there are 2 extra files: pytransform.key (data file) and license.lic (the license file for obfuscated scripts). Before v5.7.0, the runtime package has another form, Runtime Files. For Super Mode, both the runtime package and runtime files now refer to the extension module pytransform. On different platforms or different Python versions, it has a different name, for example: pytransform.pyd pytransform.so pytransform.cpython-38-darwin.so pytransform.cpython-38-x86_64-linux-gnu.so Runtime Files They're not in one package, but are 2 separate files: pytransform.py - a normal python module; _pytransform.so/.dll/.lib - a dynamic library that implements the core functions. Before v6.3.0, there are 2 extra files: pytransform.key (data file) and license.lic (the license file for obfuscated scripts). Obviously the Runtime Package is clearer than Runtime Files. Since v5.8.7, the runtime package (module) may have a suffix, for example: pytransform_vax_000001/ __init__.py ... pytransform_vax_000001.py ... The License File for Obfuscated Script There is a special runtime file license.lic; it's required to run the obfuscated scripts. Since v6.3.0, it may be embedded into the dynamic library. It's a normal Python package named pytransform, and can be imported by the Python import mechanism. If there are many packages or modules named pytransform, make sure the right package is imported by the obfuscated scripts. The runtime package for super mode is totally different from the one for non-super mode. The following notes only apply to non-super mode: The bootstrap code will load the dynamic library _pytransform.so/.dll/.dylib by ctypes. This file is platform-dependent; all the prebuilt dynamic libraries are listed here: Support Platforms. Obfuscated scripts are bound to the Python major/minor version. For example, if it's obfuscated by Python 3.6, it must run by Python 3.6. It doesn't work for Python 3.5. The obfuscated scripts are platform-dependent; all of them are listed here: Support Platforms. If the Python interpreter is compiled with Py_TRACE_REFS or Py_DEBUG, it will crash when running obfuscated scripts.
The callback function set by sys.settrace, sys.setprofile, threading.settrace and threading.setprofile will be ignored by obfuscated scripts. Any module that uses this feature will not work. Any module, for example inspect, may not work if it tries to visit the byte code, or some attributes of code objects in the obfuscated scripts. Passing the obfuscated code object by cPickle or any third-party serialization tool may not work. The obfuscated scripts duplicate the running frame, so sys._getframe([n]) may get a different frame. If an exception is raised, the line number in the traceback may be different from the original script, especially if this script has been patched by a plugin script or cross protection code. The attribute __file__ of a code object in the obfuscated scripts will be <frozen name> rather than the real filename. So in the traceback, the filename is shown as <frozen name>. Note that the module attribute __file__ is still the filename. For example, obfuscate the script foo.py and run it: def hello(msg): print(msg) # The output will be 'foo.py' print(__file__) # The output will be '<frozen foo>' print(hello.__file__) In super mode, the builtin functions dir(), vars() don't work if called with no argument; call them this way: dir() => sorted(locals().keys()) vars() => locals() Note that dir(x), vars(x) still work if x is not None. About Third-Party Interpreter About third-party interpreters, for example Jython, and any embedded Python C/C++ code, they should satisfy the following conditions at least to run the obfuscated scripts: - They must load the official Python dynamic library, which should be built from the source, and the core source code should not be modified. - On Linux, RTLD_GLOBAL must be set when loading libpythonXY.so by dlopen, otherwise obfuscated scripts can't work. Note Boost::python does not load libpythonXY.so with RTLD_GLOBAL by default, so it will raise the error "No PyCode_Type found" when running obfuscated scripts. To solve this problem, try to call the method sys.setdlopenflags(os.RTLD_GLOBAL) when initializing. - The module ctypes must exist and ctypes.pythonapi._handle must be set to the real handle of the Python dynamic library; PyArmor will query some Python C APIs by this handle. PyPy does not work with PyArmor; it's totally different from CPython.
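Since the text above stresses making sure the right pytransform package is imported, a small diagnostic like the following (plain Python, not part of PyArmor) can help; it simply prints which module the import system actually resolves, assuming a pytransform package or extension is present on the path.

    import importlib

    mod = importlib.import_module("pytransform")
    # For the non-super runtime package this prints .../pytransform/__init__.py;
    # for the super-mode extension it points at the compiled module instead.
    print(mod.__name__, "->", getattr(mod, "__file__", "<extension module>"))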
https://pyarmor.readthedocs.io/en/latest/understand-obfuscated-scripts.html
2022-01-16T19:46:14
CC-MAIN-2022-05
1642320300010.26
[]
pyarmor.readthedocs.io
@appsignal/react Installation Add the @appsignal/react and @appsignal/javascript packages to your package.json. Then, run npm install/ yarn install. You can also add these packages to your package.json on the command line: Usage Error Boundary If you are using React v16 or higher, you can use the ErrorBoundary component to catch errors from anywhere in the child component tree. Legacy Boundary The API that this component uses is unstable at this point in React's development. We offer this component as a means to support those running React v15, but do not guarantee its reliability. You are encouraged to use the ErrorBoundary whenever possible. The LegacyBoundary works in almost exactly the same way as the ErrorBoundary:
https://docs.appsignal.com/front-end/integrations/react.html
2021-10-16T05:34:22
CC-MAIN-2021-43
1634323583423.96
[]
docs.appsignal.com
- In the Create Connection dialog, enter the connection information: - Name - Name of the connection from the notebook to a DSE cluster. - Host/IP (comma delimited) - The host names or IP addresses of the DSE cluster to connect to. All hosts must be in a single datacenter. Default: localhost. - Username - Optional. DSE username for logging in. - Password - Optional. DSE password for logging in. - Port - IP connection port. Default: 9042. For example, to connect to a single-node DSE cluster on the local host using the default port: - Name - My First Connection - Host/IP 127.0.0.1 - Port - 9042
https://docs.datastax.com/en/studio/6.0/studio/createConnectionNotebook.html
2021-10-16T06:45:45
CC-MAIN-2021-43
1634323583423.96
[]
docs.datastax.com
How Do I Use Friend Convert For Adding Facebook Friends? If the steps are followed correctly, you will have Friend Convert up and running to instantly help you send multiple friend requests on Facebook in a matter of seconds. Step# 1: Once you have set up and installed Friend Convert and selected the package of your choice, head over to your Facebook account. Select the […]
https://docs.friendconvert.net/category/knowledge-base/
2021-10-16T06:03:06
CC-MAIN-2021-43
1634323583423.96
[]
docs.friendconvert.net
Sections of the text concerned are highlighted in yellow – passing the cursor over these will bring up the notation. * Changes to Benzathine benzylpenicillin (Bicillin L-A) result from changes to the packaging and formulation made by the manufacturer (Pfizer) after the manuals went to print. The doses are unchanged but are now expressed in units/mL rather than mg.
https://docs.remotephcmanuals.com.au/review/g/manuals2017-manuals/d/20624.html
2021-10-16T06:28:06
CC-MAIN-2021-43
1634323583423.96
[]
docs.remotephcmanuals.com.au
executor_dispatch_only = <boolean> * Indicator that the action must be run on an executor. * If "1", the action can only be run on an executor node and cannot be dispatched to a remote instance. * If "0", the action can be dispatched and run on a remote instance. * Default: 1 episode_state_fetch_retries = <string> * The number of attempts to fetch episode state before running an action. * An attempt is made every second. * If you edit this value, make sure it is greater than the 'group_state_batch_delay' setting in itsi_rules_engine.properties. Otherwise episode state might not be available if the action rule is configured to run for the first event in an episode. * Default: 60 Last modified on 18 September, 2019. This documentation applies to the following versions of Splunk® IT Service Intelligence: 4.3.0
https://docs.splunk.com/Documentation/ITSI/4.3.0/Configure/notable_event_actions.conf
2021-10-16T07:01:17
CC-MAIN-2021-43
1634323583423.96
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Create File for Submission Health IT and data collection is integral in assisting the various arms of the healthcare industry to reduce costs while consistently improving patient care and outcomes. Organizations such as the U.S. Centers for Medicare and Medicaid Services (CMS), the Agency for Healthcare Research & Quality (AHRQ), or even the National Committee of Quality Assurance (NCQA), offer many tools and various programs designed to assist health IT vendors, clinical practices, and the like to ensure accurate data collection for analysis and critical decision making of value-based contract arrangements. Select the appropriate file generator for submission (in this example, we will use the QPP JSON File). Then navigate to the QPP portal, where we can upload the file(s), as appropriate. Check out CMS' short video series to see how easy submitting to a quality program can be. Videos include:
https://docs.webchartnow.com/functions/quality-of-care/create-file-for-submission.html
2021-10-16T05:01:47
CC-MAIN-2021-43
1634323583423.96
[]
docs.webchartnow.com
4. Lists & Tables Table of Contents 4.1. Lists 4.1.1. Enumerated Lists Arabic numerals. lower alpha) (lower roman) upper alpha. upper roman) Lists that don't start at 1: Three Four C D iii iv List items may also be auto-enumerated. 4.1.2. Definition Lists 4.1.5.1. Simple A simple list. There are no margins between list items. Simple lists do not contain multiple paragraphs. That's a complex list. In the case of a nested list There are no margins between elements Still no margins Still no margins 4.1.5.2. Complex 4.1.6. Hlists Hlist with images 4.1.7. Numbered List 4.2.1. Grid Tables Here's a grid table followed by a simple table:
https://sphinx-rtd-theme.readthedocs.io/en/0.5.2/demo/lists_tables.html
2021-10-16T06:16:39
CC-MAIN-2021-43
1634323583423.96
[]
sphinx-rtd-theme.readthedocs.io
Event formatters Event formatters are helpers classes to format event metadata for AppSignal transactions. In the AppSignal gem, event formatters are used to format event metadata from ActiveSupport::Notifications instrumentation events to AppSignal event metadata. When a block of code is instrumented by ActiveSupport::Notifications, AppSignal will record the event on the transactions, just like it would for the Appsignal.instrument helper. Event formatters allow the data to be passed to the ActiveSupport::Notifications.instrument method call to be formatted for AppSignal events. The metadata for the events formatted by the event formatters will be visible on performance incidents detail pages in the event timeline. Hover over a specific event and the on mouse hover pop-up will show details like the exact database being queried or the query that was executed. Note: If there are no other reasons to use ActiveSupport::Notifications instrumentation than AppSignal instrumentation, we recommend using the Appsignal.instrument helper for instrumentation. Using ActiveSupport::Notifications adds more overhead than directly calling AppSignal.instrument. No event formatter will be needed either, as AppSignal.instrument accepts the metadata to be set directly. Creating an event formatter An AppSignal event formatter is a class with one instance method, format. This format method receives the event payload Hash and needs to return an Array with three values. It's possible to add event formatter for libraries that use ActiveSupport::Notifications instrumentation, but look out that there's not already an event formatter registered for it. It's also possible to create an event formatter for your own events. When adding your own event names, please mind the event naming guidelines. Each event formatter receives an event metadata "payload" Hash from which the event formatter can format the metadata for the event in AppSignal. This AppSignal event metadata needs to be returned by the event formatter in this order in an Array: - An event title ( String) - A more descriptive title of an event, such as "Fetch current user"or "Fetch blog post comments". It will appear next to the event name in the event tree on the performance sample page to provide a little more context on what's happening. - An event body ( String) - More details such as the database query that was used by the event. - An event body format ( Integer) - Body format supports formatters to scrub the given data in the bodyargument to remove any sensitive data from the value. There are currently two supported values for the body_formatargument. Appsignal::EventFormatter::DEFAULT - This default value will indicate to AppSignal to leave the value intact and not scrub any data from it. Appsignal::EventFormatter::SQL_BODY_FORMAT - The SQL_BODY_FORMATvalue will indicate to AppSignal to run your data through the SQL sanitizer and scrub any values in SQL queries. Warning: the event formatter has no exception handling wrapped around it. If the custom event formatter raises an error, it will crash the web request or background job. Example event formatter Then when instrumenting a block of code, use the event name that's registered for your custom event formatter. Changes in gem 2.5 In AppSignal for Ruby gem version 2.5.2 some changes were made in how event formatters are registered. The old method of registering event formatters was deprecated in this release and will be removed in version 3.0 of the Ruby gem. 
The new method of registering EventFormatters will allow custom formatters to be registered after AppSignal has loaded. This allows EventFormatters to be registered in Rails initializers. In gem version 2.5.1 and older, it is possible to register an event formatter like the following example, calling the register method in the class itself. With the new setup the register call was extracted from the class itself, so it can instead be registered directly on the EventFormatter class. This also means your EventFormatters no longer need to be a subclass of the Appsignal::EventFormatter class.
https://docs.appsignal.com/ruby/instrumentation/event-formatters.html
2021-10-16T06:13:12
CC-MAIN-2021-43
1634323583423.96
[]
docs.appsignal.com
Introduction You are able to use the AutoPi REST API with your browser, which will display our auto-generated documentation portal. note The documentation is auto-generated from our API, which means that the documentation will always reflect the API and will always be up to date, but being auto-generated it unfortunately also does sacrifice some readability, but we are working to improve this. And if you find something that you feel is not adequately documented, please let us know. #Authentication If you want to test the endpoints, you can authenticate in the API documentation portal by setting the token to use when authenticating. In the picture above you can see there is a green 'Authorize' button which you need to press. This is where you'll be pasting your token. It is possible to authenticate using two different tokens. #API Tokens This token has an optional expiry date, and can be generated from the "Account" page in the frontend on the Cloud. It is specifically made for users who want to make requests to the API, and is the recommended way to make authenticated requests to the API. To use the token, send an authorization header like this, in all requests: Authorization: APIToken YOUR_TOKEN #JWT Token This token is the one used by the frontend when logging in. - It expires relatively shortly. - You need to enter you username and password to acquire the token. You can get the JWT token in two different ways: #1. Capture the token by using the browser developer tools. The easiest way is to capture the token by logging into the Cloud, with the developer tools open in your favourite browser, with the network tab open, and then skip to step 6 in the below step by step guide. #2. Manually call the auth endpoint to get the token. You can follow the steps below to call the login endpoint manually. - Click the "auth" app to fold out the available endpoints. - Click the "/auth/login/". - Click the "try it out" button to the right. - Change the payload to look like this (remove the username field, and fill out the email and password fields, like so: - Click the blue "execute" button. - Now you can copy the entire token. - Now click the green "Authorize" button in the top right of the page and paste the token in the field. Remember to write "Bearer" in front of the token - like so:.4pXwtyQKCwSrYfcj9O7MGVv5ustPbx0GmYY7jHZL8es - After clicking close, you should now be able to call the other authenticated endpoints. #Sending the requests manually using Postman or similar Alternatively, if you are unable to use the above portal, or if you'd rather use something like Postman or similar, you can still see the requests and parameters in the portal, but to call them manually, see the below steps. Authenticating manually w/o interactive API documentation portal You can do the above steps manually by following these steps: Authentication To obtain an authentication token, send a post request to with header Content-Type: application/json and body In the response, you will find the token used to authenticate the below requests To request data from our API, the authorization header should be set. You will need to set the "Authorization" header on the requests. 
To set the header, use the below values: # if you're using a JWT token Authorization: Bearer YOUR_TOKEN # if you're using an API token Authorization: APIToken YOUR_TOKEN Using developer tools to see how endpoints are used If you find something where you are unsure how to proceed, you can log in to my.autopi.io and use the developer tools of your favourite browser to see the requests and parameters sent by the application, and if you are still experiencing issues, you can send us an email at [email protected]. Happy developing, and as always, if you run into issues, exceptions, have suggestions etc, please let us know. #Discussion If you'd like to discuss this topic with us or other fellow community members, you can do so on our community page dedicated for this guide: Guide: Getting started with the AutoPi REST API.
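For reference, the manual flow above can also be scripted; the sketch below uses Python's requests library, takes the /auth/login/ path, the email/password body, and the Bearer header from the steps described earlier, and treats the base URL, the token field name, and the example endpoint as assumptions to verify against the API documentation portal.

    import requests

    BASE = "https://api.autopi.io"   # assumption - confirm against the portal you normally use

    resp = requests.post(
        f"{BASE}/auth/login/",
        json={"email": "[email protected]", "password": "your-password"},
    )
    resp.raise_for_status()
    token = resp.json()["token"]     # field name may differ - inspect the login response

    headers = {"Authorization": f"Bearer {token}"}   # or "APIToken <token>" for API tokens
    devices = requests.get(f"{BASE}/dongle/devices/", headers=headers)  # example endpoint (assumption)
    print(devices.status_code)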
https://docs.autopi.io/guides/api/api-intro/
2021-10-16T04:40:49
CC-MAIN-2021-43
1634323583423.96
[array(['/assets/images/api_frontpage-d2e366bace40189a4d20d6cbd45324ac.jpg', 'api_frontpage'], dtype=object) ]
docs.autopi.io
AutoPi Logs In this guide we will talk about how you can manage your AutoPi's logs. The topics that we will cover are how you can view your device's logs and how you can download the log files to your computer. #Viewing logs There are two primary methods for viewing your AutoPi's logs. The first method uses some commands that you can write in the web terminal from the Cloud or the local admin for your device. The second method retrieves the log files directly from the device through SSH. #Viewing logs from the Cloud To retrieve the primary logs from the device, we have some terminal commands that can be executed (on my.autopi.io and local.autopi.io): $ minionutil.last_logs$ minionutil.last_errors$ minionutil.last_startup These functions can also take various parameters. You can check the documentation for those commands here or by running the following command: $ minionutil.help #Viewing logs directly on the device If you are logged onto the system using SSH (How to SSH to your device) you can view the log file by running the following command: $ less /var/log/salt/minion tip Remember that the timestamps in the log files are in UTC. #Downloading logs Sometimes, if we're having a back and forth on our support channel ([email protected]), we might ask you to provide some log files from your device. Most of the time, it will be the minion log file ( /var/log/salt/minion), but sometimes it might also be the syslog file ( /var/log/syslog). There are three ways that you can download log files from your device: downloading the files from the local admin page (local.autopi.io), copying it to your own computer with the scp command or by uploading it to your dropbox account. #Local admin page download To download the log files from the local admin page, you will need to connect to the device's WiFi hotspot first. After you've done that, you can navigate your browser to. tip If the browser is unable to load the web page because it can't resolve the URL, try typing in the IP address instead:. Once you've opened the local admin page, on the right-hand side, you should see a list of the log files available for downloading. Click on the one you'd like to download. #SCP (SSH copy) You are able to download the minion log file using the scp command from your computer. First, you'll need to have the file in the home directory of the pi user. The first two commands should be run directly on the device (through SSH) and the last one should be run from your own computer. # copy the file to the home directory$ sudo cp /var/log/salt/minion /home/pi # make sure the pi user owns the file$ sudo chown pi:pi /home/pi/minion # finally, exit SSH and run the scp command from your own computerscp [email protected]:~/minion ./ After executing those commands, you should have a minion file in your current working directory. #Uploading the log file to your dropbox If you're not familiar with SSH or you don't have access to it at the moment, you can instead upload the log file to your dropbox account. Here are the steps you need to take to get the log file uploaded: Create a Dropbox app by going here and input like so: Now click the "Generate" button under Generated access token. You can now use the below commands to send files to your new dropbox folder located in "dropbox/Apps/AutoPi Logfiles". Execute them in the web terminal, or in the SSH terminal by prepending autopibefore the command. note Remember to replace the YOUR_ACCESS_TOKEN with the actual token you received in the last step. 
# web terminal$ fileutil.upload /var/log/salt/minion gzip=True service=dropbox token=YOUR_ACCESS_TOKEN # SSH$ autopi fileutil.upload /var/log/salt/minion gzip=True service=dropbox token=YOUR_ACCESS_TOKEN You should now be able to see the uploaded file in your dropbox folder. You can also upload the file to dropbox using raw Linux commands. Here are the commands: cmd.run 'gzip --keep -f /var/log/salt/minion' And then run the following command to upload the data: cmd.run 'curl -X POST \--header "Authorization: Bearer YOUR_ACCESS_TOKEN" \--header "Dropbox-API-Arg: {\"path\": \"/minion.gz\"}" \--header "Content-Type: application/octet-stream" \--data-binary @/var/log/salt/minion.gz' NOTE: If the above command does not work, you can try this one instead (Same command on a single line, without the slashes) cmd.run 'curl -X POST --header "Authorization: Bearer YOUR_ACCESS_TOKEN" --header "Dropbox-API-Arg: {\"path\": \"/minion.gz\"}" --header "Content-Type: application/octet-stream" --data-binary @/var/log/salt/minion.gz' #Log rotation Every so often the logs on your AutoPi device will rotate. This essentially means that the current log files will be compressed and renmaed so that a new log file can start being used. This is done in order to keep the size of the log files relatively small. /var/log/salt/minion# By default, the salt minion log file is being rotated every week. Also by default, there will be 7 older versions of the log file that will be kept on the SD card before being removed. You can double check those defaults if you read the /etc/logrotate.d/salt-common file which has those definitions. /var/log/syslog# By default, the syslog file is being rotated every day. Also by default, there will be 7 older versions of the log file that will be kept on the SD card before being removed. You can double check those defaults if you read the /etc/logrotate.d/rsyslog file which has those definitions. #Discussion If you'd like to discuss this topic with us or other fellow community members, you can do so on our community pages dedicated for this guide: Guide: How to retrieve logs from your device and Tip: Send device logfile to dropbox with one command.
https://docs.autopi.io/guides/autopi-logs/
2021-10-16T05:47:36
CC-MAIN-2021-43
1634323583423.96
[array(['/assets/images/local_admin_log_files-b467a05a875bdef5f151653b5bae294b.jpg', 'local_admin_log_files'], dtype=object) ]
docs.autopi.io
Using the AutoPi with an external power supply The device you have is pre-configured to work directly in your car. When working with it in a lab environment you may need to supply the AutoPi with a controlled power supply. There are a couple of things you should be aware of before connecting your device: Always power your AutoPi from the OBD connector. Don't use the power inputs on the Raspberry Pi. Using them can make some of the functions on the AutoPi not work correctly and could potentially damage your AutoPi device. The AutoPi has several commands that can be run from the local terminal. Commands like power.status should be written out on the local terminal as you see them in our commands documentation. All terminal commands can also be run when logged in via SSH. All you need to do is prepend the autopi command, like autopi power.status. This works for every AutoPi Core command. The AutoPi auto powers down when voltage is below 12.2V. This is to prevent draining the vehicle battery. Power it with at least 12.5V when in a lab. There is a trigger (sleep timer) on the device that will initiate if there is no communication to the car, more specifically, its CAN bus. You can see sleep timers by running the power.sleep_timer command and you can clear them by running power.sleep_timer clear=*. We recommend getting the OBD power cable from our shop, to ease the connection to any external power supply. You can get it here.
https://docs.autopi.io/guides/using-the-autopi-with-an-external-power-supply/
2021-10-16T06:53:48
CC-MAIN-2021-43
1634323583423.96
[]
docs.autopi.io
TagUser Adds one or more tags to an IAM user. Parameters For information about the parameters that are common to all actions, see Common Parameters. - Tags.member.N The list of tags that you want to attach to the IAM user. Each tag consists of a key name and an associated value. Type: Array of Tag objects Array Members: Maximum number of 50 items. Required: Yes - UserName The name of the IAM user to which you want to add tags. - ConcurrentModification The request was rejected because multiple requests to change this object were submitted simultaneously. Wait a few minutes and submit your request again. The following example is formatted with line breaks for legibility; it shows how to add tags to an existing user. Authorization: <auth details> Content-Length: 99 Content-Type: application/x-www-form-urlencoded Action=TagUser&Version=2010-05-08&UserName=anika <TagUserResponse xmlns=""> <ResponseMetadata> <RequestId>EXAMPLE8-90ab-cdef-fedc-ba987EXAMPLE</RequestId> </ResponseMetadata> </TagUserResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
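Alongside the raw HTTP example, the same call can be made from an AWS SDK; the sketch below uses the AWS SDK for Python (boto3), with a made-up tag key/value and credentials assumed to be configured in the environment.

    import boto3

    iam = boto3.client("iam")
    iam.tag_user(
        UserName="anika",
        Tags=[{"Key": "Department", "Value": "Accounting"}],  # illustrative tag only
    )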
https://docs.aws.amazon.com/IAM/latest/APIReference/API_TagUser.html
2021-10-16T06:57:30
CC-MAIN-2021-43
1634323583423.96
[]
docs.aws.amazon.com
Deploy models with REST (preview) Learn how to use the Azure Machine Learning REST API to deploy models (preview). REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation. In this article, you learn how to use the new REST APIs to: - Create machine learning assets - Create a basic training job - Create a hyperparameter tuning sweep job Prerequisites - An Azure subscription for which you have administrative rights. If you don't have such a subscription, try the free or paid personal subscription. - An Azure Machine Learning workspace. - A service principal in your workspace. Administrative REST requests use service principal authentication. - A service principal authentication token. Follow the steps in Retrieve a service principal authentication token to retrieve this token. - The curl utility. The curl program is available in the Windows Subsystem for Linux or any UNIX distribution. In PowerShell, curl is an alias for Invoke-WebRequest and curl -d "key=val" -X POST uri becomes Invoke-WebRequest -Body "key=val" -Method POST -Uri uri. Set endpoint name Note Endpoint names need to be unique at the Azure region level. For example, there can be only one endpoint with the name my-endpoint in westus2. export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>" Azure Machine Learning managed online endpoints Managed online endpoints (preview) allow you to deploy your model without having to create and manage the underlying infrastructure. In this article, you'll create an online endpoint and deployment, and validate it by invoking it. But first you'll have to register the assets needed for deployment, including model, code, and environment. There are many ways to create Azure Machine Learning online endpoints, including the Azure CLI, and visually with the studio. The following example creates a managed online endpoint with the REST API. Create machine learning assets First, set up your Azure Machine Learning assets to configure your job. In the following REST API calls, we use SUBSCRIPTION_ID, RESOURCE_GROUP, LOCATION, and WORKSPACE as placeholders. Replace the placeholders with your own values. Administrative REST requests require a service principal authentication token. Replace TOKEN with your own value. You can retrieve this token with the following command: TOKEN=$(az account get-access-token --query accessToken -o tsv) The service provider uses the api-version argument to ensure compatibility. The api-version argument varies from service to service. The current Azure Machine Learning API version is 2021-03-01-preview. Set the API version as a variable to accommodate future versions: API_VERSION="2021-03-01-preview" Get storage account details To register the model and code, first they need to be uploaded to a storage account. The details of the storage account are available in the data store. In this example, you get the default datastore and Azure Storage account for your workspace. Query your workspace with a GET request to get a JSON file with the information. You can use the tool jq to parse the JSON result and get the required values.
You can also use the Azure portal to find the same information: # Get values for storage account response=$(curl --location --request GET "" \ --header "Authorization: Bearer $TOKEN") AZUREML_DEFAULT_DATASTORE=$(echo $response | jq -r '.value[0].name') AZUREML_DEFAULT_CONTAINER=$(echo $response | jq -r '.value[0].properties.contents.containerName') export AZURE_STORAGE_ACCOUNT=$(echo $response | jq -r '.value[0].properties.contents.accountName') Get the storage key: AZURE_STORAGE_KEY=$(az storage account keys list --account-name $AZURE_STORAGE_ACCOUNT | jq '.[0].value') Upload & register code Now that you have the datastore, you can upload the scoring script. Use the Azure Storage CLI to upload a blob into your default container: az storage blob upload-batch -d $AZUREML_DEFAULT_CONTAINER/score -s endpoints/online/model-1/onlinescoring Tip You can also use other methods to upload, such as the Azure portal or Azure Storage Explorer. Once you upload your code, you can specify your code with a PUT request and refer to the datastore with datastoreId: curl --location --request PUT "" \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/json" \ --data-raw "{ \"properties\": { \"description\": \"Score code\", \"datastoreId\": \"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/datastores/$AZUREML_DEFAULT_DATASTORE\", \"path\": \"score\" } }" Upload and register model Similar to the code, Upload the model files: az storage blob upload-batch -d $AZUREML_DEFAULT_CONTAINER/model -s endpoints/online/model-1/model Now, register the model: curl --location --request PUT "" \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/json" \ --data-raw "{ \"properties\": { \"datastoreId\":\"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/datastores/workspaceblobstore\", \"path\": \"model/sklearn_regression_model.pkl\", } }" Create environment The deployment needs to run in an environment that has the required dependencies. Create the environment with a PUT request. Use a docker image from Microsoft Container Registry. You can configure the docker image with Docker and add conda dependencies with condaFile. 
In the following snippet, the contents of a Conda environment (YAML file) has been read into an environment variable: ENV_VERSION=$RANDOM curl --location --request PUT "" \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/json" \ --data-raw "{ \"properties\":{ \"condaFile\": \"$CONDA_FILE\", \"Docker\": { \"DockerSpecificationType\": \"Image\", \"DockerImageUri\": \"mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1\" } } }" Create endpoint Create the online endpoint: response=$(curl --location --request PUT "" \ --header "Content-Type: application/json" \ --header "Authorization: Bearer $TOKEN" \ --data-raw "{ \"identity\": { \"type\": \"systemAssigned\" }, \"properties\": { \"authMode\": \"AMLToken\", \"traffic\": { \"blue\": 100 } }, \"location\": \"$LOCATION\" }") Create deployment Create a deployment under the endpoint: response=$(curl --location --request PUT "" \ --header "Content-Type: application/json" \ --header "Authorization: Bearer $TOKEN" \ --data-raw "{ \"location\": \"$LOCATION\", \"properties\": { \"endpointComputeType\": \"Managed\", \"scaleSettings\": { \"scaleType\": \"Manual\", \"instanceCount\": 1, \"minInstances\": 1, \"maxInstances\": 2 }, \"model\": { \"referenceType\": \"Id\", \"assetId\": \"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/models/sklearn/versions/1\" }, \"codeConfiguration\": { \"codeId\": \"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/codes/score-sklearn/versions/1\", \"scoringScript\": \"score.py\" }, \"environmentId\": \"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/environments/sklearn-env/versions/$ENV_VERSION\", \"InstanceType\": \"Standard_F2s_v2\" } }") Invoke the endpoint to score data with your model We need the scoring uri and access token to invoke the endpoint. First get the scoring uri: response=$(curl --location --request GET "" \ --header "Content-Type: application/json" \ --header "Authorization: Bearer $TOKEN") scoringUri=$(echo $response | jq -r '.properties' | jq -r '.scoringUri') Get the endpoint access token: response=$(curl -H "Content-Length: 0" --location --request POST "" \ --header "Authorization: Bearer $TOKEN") accessToken=$(echo $response | jq -r '.accessToken') Now, invoke the endpoint using curl: curl --location --request POST $scoringUri \ --header "Authorization: Bearer $accessToken" \ --header "Content-Type: application/json" \ --data-raw @endpoints/online/model-1/sample-request.json Check the logs Check the deployment logs: curl --location --request POST "" \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/json" \ --data-raw "{ \"tail\": 100 }" Delete the endpoint If you aren't going use the deployment, you should delete it with the below command (it deletes the endpoint and all the underlying deployments): curl --location --request DELETE "" \ --header "Content-Type: application/json" \ --header "Authorization: Bearer $TOKEN" || true Next steps - Learn how to deploy your model using the Azure CLI. - Learn how to deploy your model using studio. - Learn to Troubleshoot managed online endpoints deployment and scoring (preview) - Learn how to Access Azure resources with a managed online endpoint and system-managed identity (preview) - Learn how to monitor online endpoints. 
- Learn Safe rollout for online endpoints (preview).
- View costs for an Azure Machine Learning managed online endpoint (preview).
- Managed online endpoints SKU list (preview).
- Learn about limits on managed online endpoints in Manage and increase quotas for resources with Azure Machine Learning.
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-rest
2021-10-16T06:50:45
CC-MAIN-2021-43
1634323583423.96
[]
docs.microsoft.com
Table of Contents Product Index Avast, an awesome pirate treasure! PW Pirate Treasure is great for making your scene rich! This Treasure Chest comes filled with loot, plus tons of props to create your treasure scene, like crowns, a dagger, gun, and fine golden wares… Get the PW Pirate Treasure for your successful bucc.
http://docs.daz3d.com/doku.php/public/read_me/index/71677/start
2021-10-16T06:37:15
CC-MAIN-2021-43
1634323583423.96
[]
docs.daz3d.com
Date: Sun, 01 Oct 2006 08:55:46 -0400 From: Chuck Swiger <[email protected]> To: Michael Dreiding <[email protected]> Cc: [email protected] Subject: Re: Problems with installation Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help Michael Dreiding wrote: > I have downloaded the AMD64 version 6.1 > Every time I start with the boot disk loader, I get a > menu with 7 options. Whenever I select 1 through 5 > (Boot FreeBSD . . .) my laptop shuts down. > > I am running on a Laptop AMD64 3400+ > > What do I need to do to get this to install? Try looking for a BIOS update for your laptop, and try disabling what you can in the BIOS config, and/or turning the disk & CD/DVD-ROM drive from UDMA to PIO temporarily. You might also try booting from a 32-bit x86 version of FreeBSD and see whether that does any better. More info about your hardware would also be helpful... -- -Chuck Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=92981+0+/usr/local/www/mailindex/archive/2006/freebsd-questions/20061008.freebsd-questions
2021-10-16T07:00:20
CC-MAIN-2021-43
1634323583423.96
[]
docs.freebsd.org
Scanning A Drawing For Optimal Raster To Vector Conversion Results How to scan a drawing for raster to vector conversion Not all drawings can be scanned to create a raster image that can be used for raster to vector conversion. For example: - Some drawings are so faint or so dirty that whatever you do you will not be able to create a clean enough scan for conversion. - Some drawings or drawing details are too small to scan well enough for vectorization, regardless of the scanning resolution you use. - Some drawings contain so many overlapping details – for example text written over drawing lines – that even if you get a perfect scan no raster to vector converter will be able to unscramble the information. However, given a suitable drawing in good enough condition to scan well, you can eliminate many raster to vector conversion problems by being aware of the information on this page. Color, grayscale or monochrome? Most scanners give you the option of scanning in color, grayscale or monochrome. These options have different names depending on the make of scanner you have. Color Your scanner’s color option will normally create a raster image that contains 16.7 million colors. You should only use this option if you are scanning a color drawing with a view to converting it to a color DXF file. Do not use your scanner’s color option if you are scanning a black and white drawing – it is easy to do this by accident as most scanners default to color. If you are scanning a color drawing with a view to converting it to a color DXF file, experiment with your scanner’s settings until the colors on the raster image are as high contrast, vibrant and saturated as possible. Warning: Color images can be very large. An E/A0 size drawing scanned in color at 300 dpi will take up about 385Mb of memory. Grayscale Your scanner’s grayscale option (often called black and white photo) will normally create an image that contains 256 shades of gray. Grayscale images are not normally suitable for raster to vector conversion. You should only select grayscale if you are going to convert the grayscale image to black and white after scanning using Scan2CAD’s Threshold functions. See the Scan2CAD Help for more information on Simple and Adaptive Thresholds. Warning: Grayscale images can be very large. An E/A0 size drawing scanned in grayscale at 300 dpi will take up about 128Mb of memory. Monochrome Your scanner’s monochrome option (often called line art, black and white drawing or 1 bit) will create a much smaller image that contains two colors – black and white. This is the option you should normally choose when scanning a drawing for raster to vector conversion. Thresholding When you scan a drawing in monochrome your scanner or scanning software has to make a decision about which parts of the drawing to set to black in the raster image and which to set to white. This is called thresholding. If your drawing is clean and sharp this is not normally a problem. However if your drawing has faint lines or a dirty or tinted background you will have to experiment with your scanner’s settings until you get a raster image where, as far as possible, the parts of the raster image that are supposed to be black are black and the parts that are supposed to be white are white. If your scanner or scanning software sets too much of the drawing to white, it may contain breaks and holes and faint parts may be lost. 
If your scanner or scanning software sets too much of the drawing to black, text characters may “bleed” so that white spaces within them or between them become filled and speckles and dirt may appear in the background.While some scanners have good automatic thresholding and / or have software that makes setting an appropriate threshold easy, getting the best threshold on other scanners requires endless rescans.If this is the case with your scanner, you may find it easier to scan your drawing in grayscale. You can then use Scan2CAD’s Threshold functions to create a black and white image after scanning. This will allow you to experiment with different levels of black and white without having to rescan the drawing. Resolution It is not true that “the higher the scanner resolution, the better the vectorization results”. In fact, a high resolution scan can sometimes give you worse results than a low resolution scan! That said, you should be aware that while you can decrease the resolution of an image after scanning you cannot increase it. Increasing resolution after scanning will not regain any lost detail. It will simply exacerbate “steps” in the image that will decrease the quality of any raster to vector conversion. Therefore, it is better to err on the side of too high resolution rather than too low resolution when scanning. If you find your scan resolution is too high you can always decrease it after the fact using Scan2CAD’s File Menu > Raster > Statistics dialog. For most drawings, a scan resolution of 200 to 400 dpi is optimal. However, if a drawing is small (e.g. a logo) or has fine detail, you may need a higher resolution. Here are some pointers for choosing the right resolution: - If you are scanning a line drawing aim for lines about 5 pixels thick. - Lines and outlines should look smooth, not stepped: - Text characters and entities that are close together should be separated by clean white space: Note that the separation of close together entities is dependent on selecting an appropriate threshold (see above) as well as on selecting an appropriate resolution. Saving raster images We recommend that you save your scanned drawings as TIFF files. If your scanned drawing is black and white, save it as a Group 4 TIFF file. This will compress the file without causing a loss in its quality. Do not save your scans as multi layer/page TIFF files, which Scan2CAD does not support. DO NOT save your images as JPEG. JPEG uses “lossy compression”, which means that it discards data it thinks you can do without. This causes it to decrease the quality of scanned drawings by blurring the details and adding speckle artifacts. The smudging and gray “clouds” surrounding the lines in the image below are typical artifacts caused by saving a drawing as JPEG. Once you have damaged an image by saving it as JPEG, you cannot undo the damage by simply converting the JPEG image to TIFF. You will need to rescan the drawing. VERY IMPORTANT: CHECK YOUR SCAN! After scanning, check your scan. - Make sure that the full extents of the drawing have been captured. - Make sure the scan is not skew.If the scan is skew, rescan the drawing straight. While Scan2CAD can deskew scans, deskewing can decrease the quality of the scan, particularly if the scan is very skew. - Make sure that any text is legible. - Make sure that text characters and entities that are close together are separated by clean white space. If they touch partially or completely, you need to experiment with your threshold settings and or scanning resolution. 
- Make sure that the drawing lines are solid, not broken. If they are broken, you need to experiment with your threshold settings and/or scanning resolution.
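If a grayscale scan needs to be converted to monochrome outside the scanner software, as suggested in the Thresholding section above, a global thresholding pass takes only a few lines. The sketch below uses Python with Pillow purely to illustrate the concept; it is not Scan2CAD's own Threshold function, and the file names and cutoff value are assumptions you would tune per drawing:

# Illustration of simple (global) thresholding with Pillow.
# File names and the cutoff are placeholders; adjust per drawing.
from PIL import Image

CUTOFF = 160  # 0-255; raising it turns more gray pixels black

gray = Image.open("scan_grayscale.tif").convert("L")   # 8-bit grayscale
# Pixels darker than the cutoff become black, the rest white.
mono = gray.point(lambda value: 0 if value < CUTOFF else 255, mode="1")
mono.save("scan_monochrome.tif", compression="group4")  # Group 4 TIFF, as recommended above

An adaptive threshold, which varies the cutoff across the page, copes better with tinted or unevenly lit backgrounds than a single global cutoff.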
https://docs.scan2cad.com/article/76-scanning-a-drawing-for-optimal-raster-to-vector-conversion-results
2021-10-16T05:56:51
CC-MAIN-2021-43
1634323583423.96
[array(['https://www.scan2cad.com/wp-content/uploads/2011/03/image_jpeg.gif', None], dtype=object) ]
docs.scan2cad.com
Unified Remix - VOD

Create unique video-on-demand streams

Add rate cards, promos and pre-rolls using just-in-time content editing. Remix VOD enables you to seamlessly stitch together assets from different sources, without the requirement to repackage or re-encode them.
https://docs.unified-streaming.com/documentation/remix/vod/index.html
2021-10-16T06:49:15
CC-MAIN-2021-43
1634323583423.96
[]
docs.unified-streaming.com
Create Vital Signs File for Import

This document explains how to import historical vital signs for employees into WebChart.

What you will need:
- Spreadsheet software (Microsoft Excel or Google Sheets)
- Vital Signs CSV File Example
- WebChart end user with administrative rights

Create Vital Signs File

Refer to the Vital Signs CSV API Specification.
- Using the table above, determine the data to be imported. Starting with the Employee ID field, enter all required and desired data, verifying required data is present. Note that each row represents an employee record. Below is a screenshot of the Vital Signs CSV File Example for guidance.
- Save the file as CSV format.

Upload Vital Signs CSV File
- Log in as a user with administrator rights.
- Navigate to the Control Panel from the side menu.
- Select the Data Import tab.
- Select Chart Data CSV API from the drop-down menu and click Go.
- Select the Vital Signs CSV File and click Upload.

Tip: For extra information on what happens during the import, click the Verbose checkbox.

Resources
- Vital Signs CSV API Specification
- Vital Signs CSV File Example
- Validation script for Vital Signs data

Troubleshooting
Ensure that all of the fields marked as Required have valid content. The file uploaded must be saved as a CSV formatted file. In case of any errors, contact Technical Support.
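Because the import is just a CSV file, it can also be generated programmatically. The sketch below uses Python's csv module; every column name other than Employee ID is a placeholder, since the authoritative column list lives in the Vital Signs CSV API Specification referenced above:

# Hypothetical sketch of building a vital signs import file with Python.
# Column names besides "Employee ID" are placeholders -- check the
# Vital Signs CSV API Specification for the real required fields.
import csv

fieldnames = ["Employee ID", "Observation Date", "Systolic", "Diastolic", "Pulse"]

rows = [
    # One row per employee record, as described above.
    {"Employee ID": "E1001", "Observation Date": "2021-01-15",
     "Systolic": "120", "Diastolic": "78", "Pulse": "66"},
]

with open("vital_signs_import.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)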
https://docs.webchartnow.com/functions/system-administration/data-migration/create-vital-signs-file-for-import.html
2021-10-16T04:47:19
CC-MAIN-2021-43
1634323583423.96
[array(['create-vital-signs-file-for-import.images/image3.png', None], dtype=object) array(['create-vital-signs-file-for-import.images/image1.png', None], dtype=object) array(['create-vital-signs-file-for-import.images/image4.png', None], dtype=object) array(['create-vital-signs-file-for-import.images/image2.png', None], dtype=object) ]
docs.webchartnow.com
Date: Thu, 03 Oct 1996 17:39:01 -0700 From: "Jordan K. Hubbard" <[email protected]> To: [email protected] Cc: [email protected], [email protected] Subject: Re: help find compiler, please. Message-ID: <[email protected]> In-Reply-To: Your message of "Thu, 03 Oct 1996 11:50:59 CST." <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help > We have searched in vain for a C++ compiler on FreeBSD/Linux that is > robust enought to be able to compile Rogue Wave class templates. g++ > just isn't there yet. We did find one other compiler, and it failed > also. If you know of one that will do it, let us know. I'm afraid that gcc is the only game in town these days, and that's not likely to change unless some commercial enterprise decides to donate their compiler technology. What version of gcc did you last try with? Jordan Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=661198+0+/usr/local/www/mailindex/archive/1996/freebsd-questions/19960929.freebsd-questions
2021-10-16T07:06:27
CC-MAIN-2021-43
1634323583423.96
[]
docs.freebsd.org
>>>] [<app..

Appender expression options
- appender_name
- Syntax: <string>
- Description: The name of an appender from the log-searchprocess.cfg file. Use a wildcard * to identify all appenders.

This documentation applies to the following versions of Splunk Cloud Platform™: 8.2.2109, 8.0.2006, 8.0.2007, 8.1.2009, 8.1.2011, 8.1.2012, 8.1.2101, 8.1.2103, 8.2.2104, 8.2.2105 (latest FedRAMP release), 8.2.2106, 8.2.2107
https://docs.splunk.com/Documentation/SplunkCloud/8.2.2105/SearchReference/Noop
2021-10-16T06:33:46
CC-MAIN-2021-43
1634323583423.96
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Note: Monitoring Windows and Hyper-V host targets is now part of SQL Sentry. See the SQL Sentry product page for more details about subscription pricing. SQL Sentry supports monitoring Windows targets even if the target doesn't have an active SQL Server installation. This gives you the ability to independently monitor any Windows target, such as a web server. In this case, SQL Sentry delivers a complete historical record of which processes are consuming which resources. If you're familiar with SQL Sentry, you'll recognize the Windows charts on the dashboard. Several enhancements have been made to the dashboard with Windows-only monitoring in mind. The System Memory and CPU Usage charts contain visual representations for several different well-known process groups, including groups for SSRS, SSIS, and IIS. If you have a specialized group of applications, SQL Sentry gives you the ability to define your own well-known process groups. Note: SQL Sentry provides additional details for Hyper-V hosts. The images below highlight some differences such as the (VM) charts on the right side of the dashboard. See the Hyper-V Host Metrics section of the Performance Metrics article for more information. The Processes tab contains a grid view of all the processes that you're collecting information about, including related metrics. By default, processes are shown in their well-known process groups, giving you a complete picture of how application groups are consuming resources within your environment. Processes are also auto-correlated with related services. Adding a Windows Target To monitor a Windows computer with SQL Sentry, add a Windows target using Add > Target in the right-click context menu of the following Navigator pane nodes: - All Targets - Site - Group You can also add a target through the File menu. From the Add Target dialog box, select Windows Computer from the Target Type drop-down menu, enter the desired target, and then select Connect. SQL Sentry also provides you with the following additional features: Important: Monitor the individual Windows machines that are part of a Windows Cluster. SQL Sentry isn't cluster aware.
https://docs.sentryone.com/help/windows-hyper-v-sql-sentry-overview
2021-10-16T05:21:17
CC-MAIN-2021-43
1634323583423.96
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e3da2b96e121ccc3fbcd842/n/sentryone-file-new-target-menu-option-200.png', 'SQL Sentry File Menu Target Version 20.0 File Menu Target'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e3da2c28e121c56475eb530/n/sentryone-add-target-window-windows-computer-200.png', 'SQL Sentry Add Target Dialog Box Version 20.0 Add Target Dialog Box'], dtype=object) ]
docs.sentryone.com
Utilities. This namespace contains utility functions for TTL. Point and vector algebra such as scalar product and cross product between vectors are implemented here. These functions are required by functions in the ttl namespace, where they are assumed to be present in the TTLtraits class. Thus, the user can call these functions from the traits class. For efficiency reasons, the user may consider implementing these functions in the API directly on the actual data structure; see api.

Cross product between two 2D vectors. (The z-component of the actual cross product.) Returns: Definition at line 102 of file ttl_util.h. Referenced by ttl::TRIANGULATION_HELPER::ConvexBoundary(), hed::TTLtraits::CrossProduct2D(), ttl::TRIANGULATION_HELPER::degenerateTriangle(), ttl::TRIANGULATION_HELPER::InTriangle(), ttl::TRIANGULATION_HELPER::SwappableEdge(), and ttl::TRIANGULATION_HELPER::SwapTestDelaunay().

Returns a positive value if the 2D nodes/points aPA, aPB, and aPC occur in counterclockwise order; a negative value if they occur in clockwise order; and zero if they are collinear. Definition at line 117 of file ttl_util.h. Referenced by hed::TTLtraits::Orient2D().

Scalar product between two 2D vectors. Returns: Definition at line 89 of file ttl_util.h. Referenced by ttl::TRIANGULATION_HELPER::InTriangle(), hed::TTLtraits::ScalarProduct2D(), and ttl::TRIANGULATION_HELPER::SwapTestDelaunay().
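To make the geometry behind these entries concrete, here is a small Python illustration of the same three operations; it mirrors the math described above rather than the TTL C++ source, and the aPA/aPB/aPC names are kept only to echo the parameter naming:

# Python illustration of the 2D operations documented above (not the C++ source).
def cross_product_2d(u, v):
    # z-component of the cross product of two 2D vectors
    return u[0] * v[1] - u[1] * v[0]

def scalar_product_2d(u, v):
    # dot product of two 2D vectors
    return u[0] * v[0] + u[1] * v[1]

def orient_2d(aPA, aPB, aPC):
    # positive: counterclockwise, negative: clockwise, zero: collinear
    ab = (aPB[0] - aPA[0], aPB[1] - aPA[1])
    ac = (aPC[0] - aPA[0], aPC[1] - aPA[1])
    return cross_product_2d(ab, ac)

print(orient_2d((0, 0), (1, 0), (0, 1)))  # positive -> counterclockwise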
http://docs.kicad-pcb.org/doxygen/namespacettl__util.html
2018-03-17T16:38:49
CC-MAIN-2018-13
1521257645248.22
[]
docs.kicad-pcb.org
Pirate Forms Documentation the 'Pirate Forms' widget. How to Install After you have purchased the plugin go to Purchase History to download the Pirate Forms. Configure Pirate Forms You can configure Pirate Forms from Settings> Pirate Forms, in your WordPress dashboard. Options You can access all the form options from Settings > Pirate Forms > Options. There you have the following options: - Contact notification sender email: Email to use for the sender of the contact form emails both to the recipients below and the contact form submitter (if this is activated below). The domain for this email address should match your site's domain. Insert [email] to use the contact form submitter's email. - Contact submission recipients: Email address(es) to receive contact submission notifications. You can separate multiple emails with a comma. - Store submissions in the database: Should the submissions be stored in the admin area? If chosen, contact form submissions will be saved in Contacts on the left (appears after this option is activated). - Add a nonce to the contact form: Should the form use a WordPress nonce? This helps reduce spam by ensuring that the form submitter is on the site when submitting the form rather than submitting remotely. This could, however, cause problems with sites using a page caching plugin. Turn this off if you are getting complaints about forms not being able to be submitted with an error of "Nonce failed!" - Send email confirmation to form submitter: Adding text here will send an email to the form submitter. The email uses the "Successful form submission text" field from the "Alert Messages" tab as the subject line. Plain text only here, no HTML. - "Thank You" URL: Select the post-submit page for all forms submitted. Fields Settings In Field Settings, you can manage all the fields of your form, as well as add a reCaptcha to it. It has following options: - Name: Do you want the name field to be displayed? If yes then you can also set whether or not to make it required. - Email Address: Do you want the email address field to be displayed? If yes then you can also set whether or not to make it required. - Subject: Do you want the subject field to be displayed? If yes then you can also set whether or not to make it required. - Message: Do you want the message field to be displayed? If yes then you can also set whether or not to make it required. - Add a reCAPTCHA: You can add Google's reCAPTCHA to your form to prevent spam submissions. If selected, you are required to fill Site & Secret keys. - Site & Secret keys: Create an account here to get the Site key and the Secret key for the reCaptcha. - Add an attachment field: Do you want an attachment field to be displayed? Fields Labels & Alert Messages In Fields Labels, you can put the labels that you want for your fields. It includes: Name, Email, Subject, Message & Submit Button. While in Alert Messages, you can fill in the alerts which will appear when the required form fields aren't filled. Also, you can select a text for successful form submission message. SMTP Options SMTP is a communication protocol for mail servers to transmit email over the Internet. We highly recommend you to contact your hosting provider to ask your SMTP details. It has the following fields: - Use SMTP to send emails?: Choose this if you want to send emails over SMTP Instead of PHP mail function. - SMTP Host: Your SMTP host, ask your hosting provider for more details. - SMTP Port: Your SMTP port, ask your hosting provider for more details. 
- Use SMTP Authentication?: If you check this box, make sure the SMTP Username and SMTP Password are completed. - SMTP Username: Your SMTP username, ask your hosting provider for more details. - SMTP Password: Your SMTP password, ask your hosting provider for more details. Using Pirate Forms There are 3 ways of using your newly created form: Adding a widget: You can add Pirate Forms widget to your theme from Appearance > Widgets. Make sure your theme has a registered sidebar. Using a shortcode: Pirate Forms can also be added to any post or page using the [pirate_forms] shortcode. Inside your theme: If you wanna call Pirate Forms inside your theme files, then it can be called using do_shortcode function by putting the following line in your theme files: <?php echo do_shortcode( '[pirate_forms]' ) ?> Are you enjoying Pirate Forms? Rate our plugin on WordPress.org. We'd really appreciate it!
https://docs.themeisle.com/article/436-pirate-forms-documentation
2018-03-17T16:08:33
CC-MAIN-2018-13
1521257645248.22
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/570cefe29033602796674b5d/file-eoAVj2aWs5.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/57c8c528c69791083999f214/file-gz0s0rO5G8.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/57c8c6aa903360649f6e447d/file-5W25bDMekf.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/57c8c893903360649f6e4480/file-1REZixNsZH.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/57c8c9afc69791083999f216/file-qdjjESFoOM.png', None], dtype=object) ]
docs.themeisle.com
Developers’ guide¶ These instructions are for developing on a Unix-like platform, e.g. Linux or Mac OS X, with the bash shell. If you develop on Windows, please get in touch. Mailing lists¶ General discussion of Elephant development takes place in the NeuralEnsemble Google group. Discussion of issues specific to a particular ticket in the issue tracker should take place on the tracker. Using the issue tracker¶ If you find a bug in Elephant, create a ticket in the issue tracker. If you have an idea for an improvement to Elephant, create a ticket with type “enhancement”. If you already have an implementation of the idea, open a pull request. Requirements¶ See Prerequisites / Installation. We strongly recommend using virtualenv or similar. Getting the source code¶ We use the Git version control system. The best way to contribute is through GitHub. You will first need a GitHub account, and you should then fork the repository at (see). To get a local copy of the repository: $ cd /some/directory $ git clone [email protected]:<username>/elephant.git Now you need to make sure that the elephant package is on your PYTHONPATH. You can do this by installing Elephant: $ cd elephant $ python setup.py install $ python3 setup.py install but if you do this, you will have to re-run setup.py install any time you make changes to the code. A better solution is to install Elephant with the develop option, which avoids reinstalling when there are changes in the code: $ python setup.py develop or: $ pip install -e . To update to the latest version from the repository: $ git pull Running the test suite¶ Before you make any changes, run the test suite to make sure all the tests pass on your system: $ cd elephant/test With Python 2.7 or 3.x: $ nosetests --with-coverage --cover-package=elephant --cover-erase Working on the documentation¶ The documentation is written in reStructuredText, using the Sphinx documentation system. To build the documentation: $ cd elephant/doc $ make html Then open some/directory/elephant/doc/_build/html/index.html in your browser. Docstrings should conform to the NumPy docstring standard. To check that all example code in the documentation is correct, run: $ make doctest To check that all URLs in the documentation are correct, run: $ make linkcheck Committing your changes¶ Once you are happy with your changes, run the test suite again to check that you have not introduced any new bugs. Then push your changes to your fork of the Elephant repository and open a pull request on GitHub (see). Python 3¶ Elephant should work with Python 2.7 and Python 3. Coding standards and style¶ All code should conform as much as possible to PEP 8, and should run with Python 2.7 and 3.2-3.5. Making a release¶ First, check that the version string (in elephant/__init__.py, setup.py, doc/conf.py, and doc/install.rst) is correct. Second, check that the copyright statement (in LICENCE.txt, README.md, and doc/conf.py) is correct. To build a source package: $ python setup.py sdist To upload the package to PyPI (if you have the necessary permissions): $ python setup.py sdist upload Finally, tag the release in the Git repository and push it: $ git tag <version> $ git push --tags upstream
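Since the guide asks for docstrings in the NumPy standard, a quick reference may help: the function below is purely illustrative (it is not part of Elephant), but its docstring shows the expected Parameters/Returns layout:

# Illustrative only -- not an Elephant function; shows NumPy-style docstring layout.
import numpy as np

def mean_firing_rate(spike_times, t_start, t_stop):
    """Compute the mean firing rate of a spike train.

    Parameters
    ----------
    spike_times : numpy.ndarray
        Spike times in seconds.
    t_start : float
        Start of the observation interval in seconds.
    t_stop : float
        End of the observation interval in seconds.

    Returns
    -------
    float
        Number of spikes per second over ``[t_start, t_stop)``.
    """
    mask = (spike_times >= t_start) & (spike_times < t_stop)
    return np.count_nonzero(mask) / (t_stop - t_start)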
http://elephant.readthedocs.io/en/latest/developers_guide.html
2018-03-17T16:26:54
CC-MAIN-2018-13
1521257645248.22
[]
elephant.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.

Creates a new configuration recorder to record the selected resource configurations. You can use this action to change the role roleARN and/or the recordingGroup of an existing recorder.

For .NET Core and PCL this operation is only available in asynchronous form. Please refer to PutConfigurationRecorderAsync.

Namespace: Amazon.ConfigService
Assembly: AWSSDK.ConfigService.dll
Version: 3.x.y.z

Container for the necessary parameters to execute the PutConfigurationRecorder service method.
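This page documents the .NET SDK; purely for illustration, the same operation issued through the Python SDK (boto3) looks roughly like the sketch below, where the recorder name, role ARN, and recording-group settings are placeholders:

# Sketch of PutConfigurationRecorder via boto3; all values are placeholders.
import boto3

config = boto3.client("config")

config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",
        "recordingGroup": {
            "allSupported": True,
            "includeGlobalResourceTypes": True,
        },
    }
)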
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ConfigService/MConfigServicePutConfigurationRecorderPutConfigurationRecorderRequest.html
2018-03-17T16:52:59
CC-MAIN-2018-13
1521257645248.22
[]
docs.aws.amazon.com
This database view contains the history of scan operations.

Table 1. VUMV_ENTITY_SCAN_HISTORY (Field: Notes)
- SCAN_ID: Unique ID generated by the Update Manager server
- ENTITY_UID: Unique ID of the entity the scan was initiated on
- START_TIME: Start time of the scan operation
- END_TIME: End time of the scan operation
- SCAN_STATUS: Result of the scan operation (for example, Success, Failure, or Canceled)
- FAILURE_REASON: Error message describing the reason for failure
- SCAN_TYPE: Type of scan: patch or upgrade
- TARGET_COMPONENT: Target component, such as HOST_GENERAL, VM_GENERAL, VM_TOOLS, VM_HARDWAREVERSION or VA_GENERAL

Parent topic: Database Views
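Because this is an ordinary database view, it can be queried directly for reporting. Below is a small sketch; the ODBC connection string is a placeholder for your Update Manager database, and the column names come from the table above:

# Sketch: list recent failed scans from VUMV_ENTITY_SCAN_HISTORY.
# The connection string is a placeholder for your Update Manager database.
import pyodbc

conn = pyodbc.connect("DSN=vum_db;UID=report_user;PWD=example")
cursor = conn.cursor()
cursor.execute(
    "SELECT SCAN_ID, ENTITY_UID, SCAN_TYPE, START_TIME, END_TIME, FAILURE_REASON "
    "FROM VUMV_ENTITY_SCAN_HISTORY "
    "WHERE SCAN_STATUS = 'Failure' "
    "ORDER BY START_TIME DESC"
)
for row in cursor.fetchall():
    print(row.SCAN_ID, row.ENTITY_UID, row.SCAN_TYPE, row.FAILURE_REASON)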
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.update_manager.doc/GUID-06F4AD48-7175-4882-8287-FA4767B30ED0.html
2018-03-17T16:54:33
CC-MAIN-2018-13
1521257645248.22
[]
docs.vmware.com
Why do all alternative exons have the indicator "exon-inclusion" in the column "event_call"? Answer: Some users are confused by this column, since for exon arrays, all probesets have the annotation exon-inclusion. This column was added for junction arrays, in which some probesets are supplied with the annotation "mutually-exclusive" in addition to "exon-inclusion". Exon inclusion only indicates that the probeset measures exon-inclusion, but does not indicate whether there is more exon inclusion or exclusion. In most cases, users can ignore this column.
http://altanalyze.readthedocs.io/en/latest/EventCall/
2018-03-17T16:13:45
CC-MAIN-2018-13
1521257645248.22
[]
altanalyze.readthedocs.io
Harmony Server > Installation > Linux > Upgrades > Restoring Backup Files Restoring Backup Files on Linux - Copy the server.ini, Manager.confand any other files you backed up to the new installation: /usr/local/ToonBoomAnimation/harmonyPremium_15.0/etc/
https://docs.toonboom.com/help/harmony-15/premium/server/installation/linux/restore-backup-file-linux.html
2018-03-17T16:23:10
CC-MAIN-2018-13
1521257645248.22
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Consider these guidelines when working with a vSAN stretched cluster.

Configure DRS settings for the stretched cluster. DRS must be enabled on the cluster. If you place DRS in partially automated mode, you can control which VMs to migrate to each site using the vSphere Web Client.

Configure the Primary level of failures to tolerate to 1 for stretched clusters.

vSAN stretched clusters do not support symmetric multiprocessing fault tolerance (SMP-FT).

When a host is disconnected or not responding, you cannot add or remove the witness host. This limitation ensures that vSAN collects enough information from all hosts before initiating reconfiguration operations.

Using esxcli to add or remove hosts is not supported for stretched clusters.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-4172337E-E25F-4C6B-945E-01623D314FDA.html
2018-03-17T16:54:29
CC-MAIN-2018-13
1521257645248.22
[]
docs.vmware.com
Tutorial: Review usage and costs Azure Cost Management shows you usage and costs so that you can track trends, detect inefficiencies, and create alerts. All usage and cost data is displayed in Cloudyn dashboards and reports. The examples in this tutorial walk you though reviewing usage and costs using dashboards and reports. In this tutorial, you learn how to: - Track usage and cost trends - Detect usage inefficiencies - Create alerts for unusual spending or overspending If you don't have an Azure subscription, create a free account before you begin. Prerequisites - You must have an Azure account. - You must have either a trial registration or paid subscription for Azure Cost Management. Open the Cloudyn portal You review all usage and costs in the Cloudyn portal. Open the Cloudyn portal from the Azure portal or navigate to and log in. Track usage and cost trends You track actual money spent for usage and costs with Over Time reports to identify trends. To start looking at trends, use the Actual Cost Over Time report. On the reports menu at the top of the portal, click Cost > Cost Analysis > Actual Cost Over Time. When you first open the report, no groups or filters are applied to it. Here is an example report: The report shows all spending over the last 30 days. To view only spending for Azure services, apply the Service group and then filter for all Azure services. The following image shows the filtered services. In the preceding example, less money was spent starting on 2017-08-31 than before. That cost trend continues for the various services for about nine days. Then, additional spending continues as before. However, too many columns can obscure an obvious trend. You can change the report view to a line or area chart to see the data displayed in other views. The following image shows the trend more clearly. In the example, you clearly see that Azure Storage cost dropped starting on 2017-08-31 while spending on other Azure services remained level. So, what caused that reduction in spending? In this example, some employees were on vacation away from work and did not use the Storage service. To watch a tutorial video about tracking usage and cost trends, see Analyzing your cloud billing data vs. time with Azure Cost Management. Detect usage inefficiencies Optimizer reports improve efficiency, optimize usage, and identify ways to save money spent on your cloud resources. They are especially helpful with cost-effective sizing recommendations intended to help reduce idle or expensive VMs. A common problem that affects organizations when they initially move resources in to the cloud is their virtualization strategy. They often use an approach similar to the one they used for creating virtual machines for the on-premises virtualization environment. And, they assume that costs are reduced by moving their on-premises VMs to the cloud, as-is. However, that approach is not likely to reduce costs. The problem is that their existing infrastructure was already paid for. Users could create and keep large VMs running if they liked—idle or not and with little consequence. Moving large or idle VMs to the cloud is likely to increase costs. Cost allocation for resources is important when you enter into agreements with cloud service providers. You must pay for what you commit to whether you use the resource fully or not. The Cost Effective Sizing Recommendations report identifies potential annual savings by comparing VM instance type capacity to their historical CPU and memory usage data. 
On the reports menu at the top of the portal, click Optimizer > Pricing Optimization > Cost Effective Sizing Recommendations. Filter the provider to Azure to look at only Azure VMs. Here’s an example image. In this example, $3,114 could be saved by following the recommendations to change the VM instance types. Click the plus symbol (+) under Details for the first recommendation. Here are details about the first recommendation. View VM instance IDs by clicking the plus symbol next to List of Candidates. To watch a tutorial video about detecting usage inefficiencies, see Optimizing VM Size in Azure Cost Management. Create alerts for unusual spending You can alert stakeholders automatically for spending anomalies and overspending risks. You can quickly and easily create alerts using reports that support alerts based on budget and cost thresholds. You create an alert for any spending using any Cost report. In this example, use the Actual Cost Over Time report to notify you when Azure VM spending nears your total budget. On the reports menu at the top of the portal, click Cost > Cost Analysis > Actual Cost Over Time. Set Groups to Service and set Filter on the service to Azure/VM. In the top right of the report, click Actions and then select Schedule report. Use the Scheduling tab to send yourself an email of the report using the frequency that you want. Any tags, grouping, and filtering you used are included in the emailed report. Click the Threshold tab and select choose Actual Cost vs. Threshold. If you had a total budget of $500,000 and you wanted notification when costs near about half, create a Red alert at $250,000 and a Yellow alert at $240,000. Then, choose the number of consecutive alerts. When you receive total number of alerts that you specified, no additional alerts are sent. Save the scheduled report. You can also choose the Cost Percentage vs. Budget threshold metric to create alerts. By using that metric, you can use budget percentages instead of currency values. Next steps In this tutorial, you learned how to: - Track usage and cost trends - Detect usage inefficiencies - Create alerts for unusual spending or overspending Advance to the next tutorial to learn how to forecast spending using historical data.
https://docs.microsoft.com/en-us/azure/cost-management/tutorial-review-usage
2018-03-17T16:38:49
CC-MAIN-2018-13
1521257645248.22
[array(['media/tutorial-review-usage/actual-cost01.png', 'example report'], dtype=object) array(['media/tutorial-review-usage/actual-cost02.png', 'filtered services'], dtype=object) array(['media/tutorial-review-usage/actual-cost03.png', 'trend in report'], dtype=object) array(['media/tutorial-review-usage/sizing01.png', 'Azure VMs'], dtype=object) array(['media/tutorial-review-usage/sizing02.png', 'recommendation details'], dtype=object) array(['media/tutorial-review-usage/sizing03.png', 'List of Candidates'], dtype=object) array(['media/tutorial-review-usage/schedule-alert01.png', 'example report'], dtype=object) ]
docs.microsoft.com
Zend\Soap\Server Zend\Soap\Server provides a wrapper around PHP's SoapServer implementation with convenience functionality for generating WSDL and registering internal handlers. It may be used in WSDL or non-WSDL mode, and can map functionality to either PHP classes or functions in order to define your web service API. When in WSDL mode, it uses a prepared WSDL document to define server object behavior and transport layer options. WSDL documents may be auto-generated with functionality provided by the Zend\Soap\AutoDiscover component, or constructed manually using the Zend\Soap\Wsdl class or any other XML generation tool. If the non-WSDL mode is used, then all protocol options must be provided via the options mechanism. Zend\Soap\Server instantiation Instantiation of Server instances varies based on whether or not you are using WSDL mode. Options available in either mode parse_huge(since 2.7.0): when set to a boolean true, ensures the LIBXML_PARSEHUGEflag is passed to DOMDocument::loadXML()when handling an incoming request. This can resolve issues with receiving large payloads. Instantiation for WSDL mode When in WSDL mode, the constructor expects two optional parameters: $wsdl: the URI of a WSDL file. This may be set after-the-fact using $server->setWsdl($wsdl). $options: options to use when creating the instance. These may be set later using $server->setOptions($options). The following options are recognized in the WSDL mode: soap_version( soapVersion) - soap version to use ( SOAP_1_1- equivalent to calling setWsdl($wsdlValue). Instantiation for non-WSDL mode The first constructor parameter must be set to NULL if you plan to use Zend\Soap\Server functionality in non-WSDL mode. You also have to set the uri option in this case (see below). The second constructor parameter, $options, is an array of options for configuring the behavior of the server; these may also be provided later using $server->setOptions($options). Options recognized in non-WSDL mode include: soap_version( soapVersion) - soap version to use ( SOAP_1_1or SOAP_1_2). actor- the actor URI for the server. classmap( classMap) - an associative array used to map WSDL types to PHP classes. The option must be an associative array using WSDL types as the keys, and PHP class names as values. encoding- internal character encoding (UTF-8 is always used as an external encoding). uri(required) - URI namespace for SOAP server. Defining your SOAP API There are two ways to define your SOAP API in order to expose PHP functionality. The first one is to attach a class to the Zend\Soap\Server object that completely describes your::class); // Or bind an instance: $server->setObject(new MyClass()); // Handle a request: $server->handle(); Docblocks are required You should completely describe each method using a method docblock if you plan to use autodiscover functionality to prepare your WSDL. The second method for defining your API is to use one or more functions, passing them to one or more of handling Zend\Soap\Server component performs request/response processing automatically, but allows you to intercept each in order to perform pre- or post-processing. Request pre- and post-processing The Zend\Soap\Server::handle() method handles a request from the standard input stream ('php://input'). It may be overridden either by supplying a request instance to the handle() method, or by setting the request via the setRequest() method: $server = new Zend\Soap\Server(/* ... 
*/); // Set request using optional $request parameter to the handle() method: $server->handle($request); // Set request using setRequest() method: $server->setRequest(); $server->handle(); A request object may be represented using any of the following, and handled as follows: DOMDocument(casts to XML) DOMNode(owner document is retrieved and cast to XML) SimpleXMLElement(casts to XML) stdClass( __toString()is called and verified to be valid XML) string(verified to be valid XML) The last request processed may be retrieved using the getLastRequest() method, which returns the XML string: $server = new Zend\Soap\Server(/* ... */); $server->handle(); $request = $server->getLastRequest(); Response post-processing The Zend\Soap\Server::handle() method automatically emits the generated response to the output stream. It may be blocked using setReturnResponse() with true or false as a parameter. When set to true, handle() will return the generated response instead of emitting it. The returned response will be either an XML string representing the response, or a SoapFault exception instance. Do not return SoapFaults SoapFault instances, when cast to a string, will contain the full exception stack trace. For security purposes, you do not want to return that information. As such, check your return type before emitting the response manually. $server = new Zend\Soap\Server(/* ... */); // Get a response as a return value of handle(), // instead of emitting it to standard output: $server->setReturnResponse(true); $response = $server->handle(); if ($response instanceof SoapFault) { /* ... */ } else { /* ... */ } The last response emitted may also be retrieved for post-processing using getLastResponse(): $server = new Zend\Soap\Server(/* ... */); $server->handle(); $response = $server->getLastResponse(); if ($response instanceof SoapFault) { /* ... */ } else { /* ... */ } Document/Literal WSDL Handling The document/literal binding-style/encoding pattern is used to make SOAP messages as human-readable as possible and allow abstraction between very incompatible languages. The .NET framework uses this pattern for SOAP service generation by default. The central concept of this approach to SOAP is the introduction of a Request and an Response object for every function/method of the SOAP service. The parameters of the function are properties on the request object, and the response object contains a single parameter that is built in the style <methodName>Result zend-soap supports this pattern in both the AutoDiscover and Server components. You can write your service object without knowledge of.
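The examples in this entry are server-side PHP. Purely as an illustration of the client side from another stack, a Python consumer of a WSDL-mode service built this way could use the zeep library; the endpoint URL and operation name below are placeholders, not part of zend-soap:

# Illustrative SOAP client in Python using zeep; URL and operation are placeholders.
from zeep import Client

client = Client("https://example.com/soap-endpoint?wsdl")
result = client.service.sayHello("world")
print(result)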
https://docs.zendframework.com/zend-soap/server/
2018-03-17T16:12:59
CC-MAIN-2018-13
1521257645248.22
[]
docs.zendframework.com
Event Log. Event Entry Written Log. Event Entry Written Log. Event Entry Written Log. Event Entry Written Definition Occurs when an entry is written to an event log on the local computer. public: event System::Diagnostics::EntryWrittenEventHandler ^ EntryWritten; public event System.Diagnostics.EntryWrittenEventHandler EntryWritten; member this.EntryWritten : System.Diagnostics.EntryWrittenEventHandler Public Custom Event EntryWritten As EntryWrittenEventHandler Examples The following example handles an entry written event. #using <System.dll> using namespace System; using namespace System::Diagnostics; using namespace System::Threading; ref class MySample { private: // This member is used to wait for events. static AutoResetEvent^ signal; public: static void main() { signal = gcnew AutoResetEvent( false ); EventLog^ myNewLog = gcnew EventLog; myNewLog->Source = "testEventLogEvent"; myNewLog->EntryWritten += gcnew EntryWrittenEventHandler( MyOnEntryWritten ); myNewLog->EnableRaisingEvents = true; myNewLog->WriteEntry("Test message", EventLogEntryType::Information); signal->WaitOne(); } static void MyOnEntryWritten( Object^ /*source*/, EntryWrittenEventArgs^ /*e*/ ) { Console::WriteLine("In event handler"); signal->Set(); } }; int main() { MySample::main(); }(); } } Option Explicit On Option Strict On Imports System Imports System.Diagnostics Imports System.Threading Class MySample ' This member is used to wait for events. Private Shared signal As AutoResetEvent Public Shared Sub Main() signal = New AutoResetEvent(False) Dim myNewLog As New EventLog("Application", ".", "testEventLogEvent") AddHandler myNewLog.EntryWritten, AddressOf MyOnEntryWritten myNewLog.EnableRaisingEvents = True myNewLog.WriteEntry("Test message", EventLogEntryType.Information) signal.WaitOne() End Sub ' Main Public Shared Sub MyOnEntryWritten(ByVal [source] As Object, ByVal e As EntryWrittenEventArgs) Console.WriteLine("In event handler") signal.Set() End Sub ' MyOnEntryWritten End Class ' MySample Remarks. Security EventLogPermission for administering event log information on the computer. Associated enumeration: Administer Applies to See also Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.eventlog.entrywritten?redirectedfrom=MSDN&view=netframework-4.7.2
2019-02-15T21:24:58
CC-MAIN-2019-09
1550247479159.2
[]
docs.microsoft.com
Distributed File System Replication The Distributed File System Replication (DFSR) service is a state-based, multimaster replication engine that supports replication scheduling and bandwidth throttling. DFSR uses a compression algorithm known as remote differential compression (RDC). RDC is a "diff-over-the wire" client/server protocol that can be used to efficiently update files over a limited-bandwidth network. RDC detects insertions, removals, and rearrangements of data in files, enabling DFSR to replicate only the changed file blocks when files are updated. For more information about DFSR, see Introduction to DFS Replication. You can use the DFSR WMI provider to create tools for configuring and monitoring the DFSR service. For more information, see the following topics: Run-Time Requirements DFSR is supported on Windows server operating systems. It is supported on Windows Vista, but it is not supported on any other Windows client operating systems. For information about run-time requirements for a particular programming element, see the Requirements section of the documentation for that element. Related topics
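The DFSR WMI provider mentioned above can also be explored from a script. The sketch below uses the third-party Python "wmi" package and assumes the provider's legacy root\MicrosoftDFS namespace (an assumption worth verifying on your server); it only lists the classes the provider exposes:

# Sketch: enumerate classes exposed by the DFSR WMI provider on Windows.
# Requires "pip install wmi"; the namespace name is an assumption.
import wmi

dfsr = wmi.WMI(namespace=r"root\MicrosoftDFS")
for class_name in sorted(dfsr.classes):
    if class_name.lower().startswith("dfsr"):
        print(class_name)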
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/dfsr/distributed-file-system-replication--dfsr-
2019-02-15T21:00:54
CC-MAIN-2019-09
1550247479159.2
[]
docs.microsoft.com
End Of Life¶ Description¶ Each release of Fedora is maintained as laid out in the maintenance schedule. At the conclusion of the maintenance period, a Fedora release enters end of life status. This procedure describes the tasks necessary to move a release to that status. Actions¶ Set date¶ - Releng responsibilities: - Follow guidelines of maintenance schedule - Take into account any infrastructure or other supporting project resource contention - Announce the closure of the release to the package maintainers. Reminder announcement¶ - from rel-eng to f-devel-announce, f-announce-l, including - date of last update push (if needed) - date of actual EOL Koji tasks¶ disable builds by removing targets koji remove-target f19 koji remove-target f19-updates-candidate Purge from disk the signed copies of rpms that are signed with the EOL’d release key Bodhi tasks¶ Run the following end of life script from bodhi backend bodhi-manage-releases edit --name F21 --state archived PackageDB¶ Set the release to be End of Life in the PackageDB. A admin can login and do this from the web interface. Source Control (git)¶ - Branches for new packages in git are not allowed for distribution X after the Fedora X+2 release. New builds are no longer allowed for EOL Fedora releases. Fedora Program Manager Tasks¶ - Close all open bugs - End of Life Process Bugzilla¶ - Update the description of Fedora in bugzilla for the current releases. - Get someone from sysadmin-main to login as the [email protected] user to bugzilla. - Have them edit the description of the Fedora product here: Badges tasks¶ Update the cold undead hands badge. - In order to do this, you need to be in the sysadmin-badges group and the gitbadges group. If you’re not, just email those two groups at [email protected] and [email protected]. Tell them that they need to update this badge and point them to these instructions. - Clone the repo with `` $ git clone ssh://[email protected]/fedora-badges-assets.git`` - Edit rules/you-can-pry-it-from-my-cold-undead-hands.yml and add the EOL release to the list in the trigger section on line 19. - Push that back to fedorahosted. - Push the rule change out live to our servers by logging into batcave and running the manual/push-badges.yml playbook. - All done. Cloud tasks¶ Note FIXME: This needs updating, I’m pretty sure we need to do something with fedimg here - Remove unsupported EC2 images from Taskotron tasks¶ File Taskotron ticket and ask for the EOL’d release support to be removed. Final announcement¶ - from releng to f-announce-l - on EOL date if at all possible - link to previous reminder announcement (use HTTPS) Announcement content¶ As of the <eol_date>, Fedora X has reached its end of life for updates and support. No further updates, including security updates, will be available for Fedora X. A previous reminder was sent on <announcement_daet> [0]. Fedora X+1 will continue to receive updates until approximately one month after the release of Fedora X+3. The maintenance schedule of Fedora releases is documented on the Fedora Project wiki [1]. The Fedora Project wiki also contains instructions [2] on how to upgrade from a previous release of Fedora to a version receiving updates. <your_name>. [0]<url to the announcement from [email protected] list> [1] [2] Note All dates should follow xxth of month year format.(Example: 19th of July 2016) Update eol wiki page¶ update with release and number of days.
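Since the SOP requires a specific date style ("19th of July 2016"), the announcement body can be rendered from the template above with a small helper; the release numbers and dates below are example values only:

# Example: fill the EOL announcement template from this SOP.
# Release numbers and dates are example values only.
TEMPLATE = (
    "As of the {eol_date}, Fedora {x} has reached its end of life for updates "
    "and support. No further updates, including security updates, will be "
    "available for Fedora {x}. A previous reminder was sent on "
    "{announcement_date} [0]. Fedora {x_plus_1} will continue to receive "
    "updates until approximately one month after the release of Fedora {x_plus_3}."
)

def ordinal(day):
    # 1 -> 1st, 2 -> 2nd, 19 -> 19th, matching the SOP's date format
    if 11 <= day % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(day % 10, "th")
    return "{}{}".format(day, suffix)

print(TEMPLATE.format(
    eol_date="{} of July 2016".format(ordinal(19)),
    announcement_date="{} of June 2016".format(ordinal(20)),
    x=23, x_plus_1=24, x_plus_3=26,
))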
https://docs.pagure.org/releng/sop_end_of_life.html
2019-02-15T22:01:24
CC-MAIN-2019-09
1550247479159.2
[]
docs.pagure.org
Manage Service Plans This topic describes how Pivotal Cloud Foundry (PCF) Administrators manage Single Sign-On service plans. Single Sign-On Single Sign-On your Pivotal Elastic Runtime tile in Ops Manager under the Credentials tab. Click New Plan on the SSO dashboard to create a new Single Sign-On. Configure a Token Policy..
https://docs.pivotal.io/p-identity/1-5/manage-service-plans.html
2019-02-15T20:48:23
CC-MAIN-2019-09
1550247479159.2
[]
docs.pivotal.io
Polyaxon CLI is a tool and a client to interact with Polyaxon, it allows you to manage your cluster, users, projects, and experiments. Installation To install Polyaxon CLI please refer to the installation documentation. To get help from The Polyaxon CLI, you can run the following $ polyaxon --help To get help for any Polyaxon CLI Command, you can run the following $ polyaxon command --help Commands References - Auth - Check - Config - User - Superuser role - Init - Project - Upload - Run - Experiment Group - Experiment - Job - Build - Dashboard - Tensorboard - Notebook - Cluster - Bookmark - Version Caching When using the Polyaxon CLI to run a command requiring a project, group, experiment, and/or a job, you can always specify the values for these options, example: $ polyaxon project --project=user_1/project_10 get $ polyaxon experiment --project=user_1/project_10 --experiment=2 get $ polyaxon experiment --project=user_1/project_10 --experiment=3 --job=2 logs $ polyaxon group --project=user_1/project_10 --group=2 experiments Polyaxon CLI allows also you to omit these options, i.e. project, experiment group, experiment, and job, the CLI does the following: - When a username is missing, the username of the logged-in user is used. - When a project name is missing, the name of the currently initialized project is used. - When an experiment group, experiment, or job is missing, the last value is used. - If no values are found, the CLI will show an error. Some commands with caching: $ polyaxon project get $ polyaxon experiment get $ polyaxon group experiments $ polyaxon job logs - ... Switching context Users don't have to change to a new project to access information about that project, and its jobs, builds, experiments, groups, tensorboards, and notebooks. All commands allow to change the project context by providing -p project or --project=project. If you are an admin you can as well check other users projects without initializing the projects, -p user/project or --project=user/project. Here are some examples: Getting other projects experiments: polyaxon project -p mnist experiments -s "-created_at" polyaxon project --project=adam/mnist experiments -q "status: failed" Getting tensorboards for some projects: polyaxon project --project=mnist tensorboards --sort="-created_at" polyaxon project -p adam/mnist tensorboards --query="status: running" Getting information about a specific experiment: polyaxon experiment -p mnist -xp 13 get polyaxon experiment -p adam/mnist --experiment=13 get Getting information about a specific build: polyaxon build -p mnist -b 113 get polyaxon build -p adam/mnist --build=13 get
https://docs.polyaxon.com/references/polyaxon-cli/
2019-02-15T21:58:59
CC-MAIN-2019-09
1550247479159.2
[]
docs.polyaxon.com
Hello, I found an another bug in your plugin. When I set a coupon with any value, the signup form accept it, but I cannot pay via PayPal. I get this below error message on Dashboard: There was an error when trying to contact PayPal. Contact the Network admin. Item amount is invalid. Code: 1 I tried to use $2 coupon and 98% discount coupons as well, but none of them works. Please investigate this issue. Thank you!2 months, 2 weeks ago goldenticketParticipant I remember reading about Code 1 on a different thread: If that doesn’t work also checkout:–1-ts1221 I don’t know if this applies to your individual situation or not. I hope this helps.2 months, 1 week ago Arindo DuqueKeymaster Hey, @feriman. I was able to replicate this issue locally, but it happening very sporadically. I’ll see what I can do to fix it and send a patch to you over here. Would you be able to send me the contents of your WP Ultimo -> System Info page? Kind regards, - This reply was modified 2 months, 1 week ago by Arindo Duque. Bugreport, which one is related for this topic: While register, click on “Have a coupon code?”, enter something invalid, remove the check from box and press “Create Account” button. It will return with “The coupon code you entered is not valid or is expired.” error message. Please fix this issue as well. Thanks! 🙂1 month, 4 weeks ago Arindo DuqueKeymaster That’s not a bug per se, is just something that we haven’t implemented yet. I do know that it makes sense, though. Right now, the only thing a coupon code link does is to pre-fill the coupon code field on the last step. Kind regards, Hi @Arindo Duque, Thank for reply! Will you implement this function in your plugin? If yes, can you add it to roadmap? Thanks! 🙂1 month, 3 weeks ago Arindo DuqueKeymaster We are doing it right now. We will be moving to a new roadmap platform soon, so I’m not sure if I’ll add it to the current one. Kind regards, You must be logged in to reply to this topic.
https://docs.wpultimo.com/community/topic/coupon-code-is-not-working/
2019-02-15T20:50:01
CC-MAIN-2019-09
1550247479159.2
[]
docs.wpultimo.com
Configuring Affinity between vSmart and vEdge Devices

One way to manage network scale is to configure affinity between vSmart controllers and vEdge routers. To do this, you place each vSmart controller into a controller group, and then you configure which group or groups a vEdge router can establish control connections with. The controller groups are what establish the affinity between vSmart controllers and vEdge routers.

Configure the Controller Group Identifier on vSmart Controllers

To participate in affinity, each vSmart controller must be assigned a controller group identifier:

vSmart(config)# system controller-group-id number

The identifier can be a number from 0 through 100.

When vSmart controllers are in different data centers, it is recommended that you assign them different controller group identifiers. Doing this provides redundancy among data centers, in case a data center becomes unreachable.

vSmart controllers in the same data center can have the same controller group identifier or different identifiers:

- If the vSmart controllers have the same controller group identifier, a vEdge router establishes a control connection to any one of them. If that vSmart controller becomes unreachable, the router simply establishes a control connection with another one of the controllers in the data center. As an example of how this might work, if one vSmart controller becomes unavailable during a software upgrade, the vEdge router immediately establishes a new TLOC with another vSmart controller, and the router's network operation is not interrupted. This network design provides redundancy among vSmart controllers in a data center.
- If the vSmart controllers have different controller group identifiers, a vEdge router can use one controller as the preferred controller and the other as a backup. As an example of how this might work, if you are upgrading the vSmart controller software, you can upgrade one controller group at a time. If a problem occurs with the upgrade, a vEdge router establishes TLOCs with the vSmart controllers in the second, backup controller group, and the router's network operation is not interrupted. When the vSmart controller in the first group again becomes available, the vEdge router switches its TLOCs back to that controller. This network design, while offering redundancy among the vSmart controllers in a data center, also provides additional fault isolation.
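For example, in a deployment with two data centers you might assign group identifier 1 to the vSmart controllers in the first data center and group identifier 2 to those in the second. The values below are illustrative; any identifiers from 0 through 100 work:

On each vSmart controller in Data Center 1:

vSmart(config)# system controller-group-id 1
vSmart(config)# commit

On each vSmart controller in Data Center 2:

vSmart(config)# system controller-group-id 2
vSmart(config)# commit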
Configure Affinity on vEdge Routers

For a vEdge router to participate in affinity, you configure the vSmart controllers that the router is allowed to establish control connections with, and you configure the maximum number of control connections (or TLOCs) that the vEdge router itself, and that an individual tunnel on the router, is allowed to establish.

Configure a Controller Group List

Configuring the vSmart controllers that the router is allowed to establish control connections with is a two-part process:

- At the system level, configure a single list of all the controller group identifiers that are present in the overlay network.
- For each tunnel interface in VPN 0, you can choose to restrict which controller group identifiers the tunnel interface can establish control connections with. To do this, configure an exclusion list.

At the system level, configure the identifiers of the vSmart controller groups:

vEdge(config)# system controller-group-list numbers

List the vSmart controller group identifiers that any of the tunnel connections on the vEdge router might want to establish control connections with. It is recommended that this list contain the identifiers for all the vSmart controller groups in the overlay network.

If, for a specific tunnel interface in VPN 0, you want it to establish control connections to only a subset of all the vSmart controller groups, configure the group identifiers to exclude:

vEdge(config-vpn-0-interface)# tunnel-interface exclude-controller-group-list numbers

In this command, list the identifiers of the vSmart controller groups that this particular tunnel interface should never establish control connections with. The controller groups in this list must be a subset of the controller groups configured with the system controller-group-list command.

To display the controller groups configured on a vEdge router, use the show control connections command.

Configure the Maximum Number of Control Connections

Configuring the maximum number of control connections for the vEdge router is a two-part process:

- At the system level, configure the maximum number of control connections that the vEdge router can establish to vSmart controllers.
- For each tunnel interface in VPN 0, configure the maximum number of control connections that the tunnel can establish to vSmart controllers.

By default, a vEdge router can establish two OMP sessions for control connections to vSmart controllers. To modify the maximum number of OMP sessions:

vEdge(config)# system max-omp-sessions number

The number of OMP sessions can be from 0 through 100. A vEdge router establishes OMP sessions as follows:

- Each DTLS and each TLS control plane tunnel creates a separate OMP session.
- It is the vEdge router as a whole, not the individual tunnel interfaces in VPN 0, that establishes OMP sessions with vSmart controllers. When different tunnel interfaces on the router have affinity with the same vSmart controller group, the vEdge router creates a single OMP session to one of the vSmart controllers in that group, and the different tunnel interfaces use this single OMP session.

By default, each tunnel interface in VPN 0 can establish two control connections. To change this:

vEdge(config-vpn-0-interface)# vpn 0 interface interface-name tunnel-interface max-control-connections number

The number of control connections can be from 0 through 100. The default value is the maximum number of OMP sessions configured with the system max-omp-sessions command.

When a vEdge router has multiple WAN transport connections, and hence multiple tunnel interfaces in VPN 0, the sum of the maximum number of control connections that all the tunnels can establish cannot exceed the maximum number allowed on the router itself.

To display the maximum number of control connections configured on an interface, use the show control local-properties command. To display the actual number of control connections for each tunnel interface, use the show control affinity config command. To display a list of the vSmart controllers that each tunnel interface has established control connections with, use the show control affinity status command.
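Putting the vEdge-side commands together, a minimal affinity configuration for a router with two WAN tunnel interfaces might look like the following sketch. The group identifiers, interface names, and connection counts are illustrative and should be adapted to your topology; they assume the two controller groups 1 and 2 from the earlier vSmart example:

vEdge(config)# system controller-group-list 1 2
vEdge(config)# system max-omp-sessions 2
vEdge(config)# vpn 0 interface ge0/0 tunnel-interface max-control-connections 1
vEdge(config)# vpn 0 interface ge0/0 tunnel-interface exclude-controller-group-list 2
vEdge(config)# vpn 0 interface ge0/1 tunnel-interface max-control-connections 1
vEdge(config)# vpn 0 interface ge0/1 tunnel-interface exclude-controller-group-list 1
vEdge(config)# commit

With this configuration, the tunnel on ge0/0 connects only to controller group 1 and the tunnel on ge0/1 only to controller group 2, so the router reaches both data centers while staying within its limit of two OMP sessions.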
Best Practices for Configuring Affinity

- In the system controller-group-list command on the vEdge router, list all the controller groups available in the overlay network. Doing so ensures that all the vSmart controllers in the overlay network are available for the affinity configuration, and it provides additional redundancy in case connectivity to the preferred group or groups is lost. You manipulate the number of control connections and their priority based on the maximum number of OMP sessions for the router, the maximum number of control connections for the tunnel, and the controller groups a tunnel should not use.

  A case in which listing all the controller groups in the system controller-group-list command provides additional redundancy is when the vEdge router site has connectivity issues reaching the vSmart controllers in the controller group list. To illustrate this, suppose that in a network with three controller groups (1, 2, and 3), the controller group list on a vEdge router contains only groups 1 and 2, because these are the preferred groups. If the router learns from the vBond orchestrator that the vSmart controllers in groups 1 and 2 are up, but the router has connectivity issues to both sites, the router loses its connectivity to the overlay network. However, if the controller group list contains all three controller groups, then even though group 3 is not a preferred group, the router can fall back and connect to the controllers in group 3 when it is unable to connect to the vSmart controllers in group 1 or group 2.

  Configuring affinity and the order in which to connect to vSmart controllers is only a preference. The preference is honored whenever possible. However, the overarching rule in enforcing high availability on the overlay network is to use any operational vSmart controller. The network ceases to function only when no vSmart controllers are operational. So it might happen that the least preferred vSmart controller is used if it is the only controller operational in the network at a particular time. When a vEdge router boots, it learns about all the vSmart controllers in the overlay network, and the vBond orchestrator continuously communicates to the router which vSmart controllers are up. So, if a vEdge router cannot reach any of the preferred vSmart controllers in the configured controller group and another vSmart controller is up, the router connects to that controller. Put another way, in a network with multiple vSmart controllers, as a last resort, a vEdge router connects to any of the controllers to ensure that the overlay network remains operational, whether or not these controllers are configured in the router's controller group list.

- The controller groups listed in the exclude-controller-group-list command must be a subset of the controller groups configured for the entire router in the system controller-group-list command.

- When a data center has multiple vSmart controllers that use the same controller group identifier, and when the overlay network has two or more data centers, it is recommended that the number of vSmart controllers in each of the controller groups be the same. For example, if Data Center 1 has three vSmart controllers, all with the same group identifier (let's say, 1), Data Center 2 should also have three vSmart controllers, all with the same group identifier (let's say, 2), and any additional data centers should also have three vSmart controllers.
- When a data center has vSmart controllers in the same controller group, the hardware capabilities (specifically, the memory and CPU) on all the vSmart controllers should be identical. More broadly, all the vSmart controllers in the overlay network, whether in one data center or in many, should have the same hardware capabilities. Each vSmart controller should have equal capacity and capability to handle a control connection from any of the vEdge routers in the network.

- When a router has two tunnel connections and the network has two (or more) data centers, it is recommended that you configure one of the tunnel interfaces to go to one of the data centers and the other to go to the second. This configuration provides vSmart redundancy with the minimum number of OMP sessions.

- Whenever possible in your network design, you should leverage affinity configurations to create fault-isolation domains.

Additional Information

High Availability Configuration Examples
High Availability Overview
https://sdwan-docs.cisco.com/Product_Documentation/Software_Features/Release_17.2/09High_Availability_and_Scaling/03Configuring_Affinity_between_vSmart_and_vEdge_Devices
2019-02-15T22:06:14
CC-MAIN-2019-09
1550247479159.2
[]
sdwan-docs.cisco.com
Command Line Interface (CLI)

Develop, deploy, and operate upgradeable smart contract projects. Support for Ethereum and every other EVM-powered blockchain.

- Interactive commands: Send transactions, query balances, and interact with your contracts directly from the command line, using commands like oz send-tx, oz call, oz balance, and oz transfer.
- Deploy & upgrade your contracts: You can develop your smart contracts iteratively, speeding up development locally, or squashing bugs in production. Run oz deploy to deploy your contracts, followed by oz upgrade any time you want to change their code.
- Link Ethereum Packages: Use code from contracts already deployed to the blockchain directly in your project, saving gas on deployments and managing your dependencies securely, with just an oz link command.
- Bootstrap your dapp: Jumpstart your dapp by unpacking one of our starter kits, pre-configured with OpenZeppelin Contracts, React, and Infura. Run oz unpack to start!

Overview

Usage

All CLI commands are fully interactive: you can call them with no or incomplete arguments and they will prompt you for options as they proceed. Below is a short list of the most used commands:

- oz init: initialize your OpenZeppelin project
- oz compile: compile all Solidity smart contracts in your project
- oz deploy: deploy an upgradeable smart contract
- oz send-tx: send a transaction to a contract and execute a function
- oz call: read data from the blockchain by calling view and pure functions
- oz upgrade: upgrade a deployed contract to a new version without changing the address or state
- oz unpack: bootstrap a project with a Starter Kit
- oz link: reuse on-chain code by linking to Ethereum Packages

Learn More

Head to Getting Started to see the CLI in action by deploying and upgrading a smart contract! Using Dependencies showcases a more complex project being built, including leveraging the OpenZeppelin Contracts library. If you are a Truffle user, go to Using With Truffle for information on using both tools on the same project.

Take a look at the API reference for all CLI commands. For an overview of the internals of the CLI, you can read about the Contracts Architecture and the different Configuration Files.
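To give a feel for how the commands listed above fit together, here is a sketch of a typical session. The install line assumes npm, and every command prompts interactively for any detail not supplied (network, contract name, function, and so on):

$ npm install --global @openzeppelin/cli
$ oz init       # initialize a project; prompts for a name and version
$ oz compile    # compile the Solidity contracts in the project
$ oz deploy     # deploy a contract; prompts for the network and contract
$ oz send-tx    # execute a state-changing function on the deployed contract
$ oz call       # read data via a view or pure function
$ oz upgrade    # after editing the contract, push the new code to the same address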
https://docs.openzeppelin.com/cli/2.8/
2021-01-16T02:39:43
CC-MAIN-2021-04
1610703499999.6
[]
docs.openzeppelin.com
How do I use real-time targeting?

Dynamic content delivery

Example scenario: Upselling a user after conversion

A user has purchased a virtual item in your in-app store and you'd like to deliver a unique sale to the converted customer when they enter the store again in the same session. To do this, first create an in-app messaging campaign in Swrve, defining the message trigger at store entry and targeting the message at the appropriate user segment. For more information about creating in-app messages, see Creating in-app messages. When users in the segment enter the store, the in-app message is triggered and they see the campaign message.

Next, work with your development team to send Swrve any queued events on user purchase and to request campaigns from Swrve (in the client code). Now, within the same user session, the user receives a unique message with an offer of your choice. For example, you could deliver the following:

- In-app interstitials based on segmentation: target engaged spenders while they're in the app. Upsell to users by offering a unique sale after purchase. Deliver special sale offers on every new in-app achievement, or deliver an offer immediately to those who share your app with Facebook friends.
- A/B tests: provide different prices to those who have passed a certain stage in the app. Offer a completely new IAP store based on level promotion or app session length.

Real-time segments enable you to take an active role in delivering content to users as they use your app. Improve reengagement by changing the difficulty levels for players unable to pass certain levels, or promote a different level path or different achievement goals based on how a user is progressing.

Development changes for dynamic content delivery

For information about the development changes required to use real-time targeting, consult the developer documentation for the platform of your app (requires SDK version 3.0 or higher).
https://docs.swrve.com/user-documentation/segmentation/segmentation-faq/real-time-targeting/
2021-01-16T02:50:02
CC-MAIN-2021-04
1610703499999.6
[]
docs.swrve.com
Breaking: #64190 - FormEngine Checkbox Element limitation of cols setting

See Issue #64190

Description

The TCA configuration for checkbox cols has been changed. We reduced the number of accepted values to 1, 2, 3, 4 and 6 to provide a responsive experience. For use cases like checkboxes for weekdays (mo, tu, we, th, fr, sa, su), we introduced a new value, inline.

Affected installations

Installations with TCA column configurations for checkboxes whose cols value is 5 or greater than 6.
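For reference, a checkbox column that previously spread weekday checkboxes across a wider cols value can switch to the new inline mode. This is a minimal sketch; the field name, labels, and surrounding TCA are invented for illustration:

'weekdays' => [
    'label' => 'Weekdays',
    'config' => [
        'type' => 'check',
        'items' => [
            ['Mo', ''], ['Tu', ''], ['We', ''], ['Th', ''],
            ['Fr', ''], ['Sa', ''], ['Su', ''],
        ],
        // Renders all checkboxes in a single row instead of a fixed column count
        'cols' => 'inline',
    ],
],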
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.1/Breaking-64190-FormEngineCheckboxElement.html
2021-01-16T02:50:57
CC-MAIN-2021-04
1610703499999.6
[]
docs.typo3.org