Columns: output (string, lengths 7 to 3.46k); input (string, 1 value); instruction (string, lengths 129 to 114k)
Disclosed are systems, methods, circuits and associated computer executable code for deep learning based natural language understanding, wherein training of one or more neural networks, includes: producing character strings inputs ‘noise’ on a per-character basis, and introducing the produced ‘noise’ into machine training character strings inputs fed to a ‘word tokenization and spelling correction language-model’, to generate spell corrected word sets outputs; feeding machine training word sets inputs, including one or more ‘right’ examples of correctly semantically-tagged word sets, to a ‘word semantics derivation model’, to generate semantically tagged sentences outputs. Upon models reaching a training ‘steady state’, the ‘word tokenization and spelling correction language-model’ is fed with input character strings representing ‘real’ linguistic user inputs, generating word sets outputs that are fed as inputs to the word semantics derivation model for generating semantically tagged sentences outputs.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A neural network based system for spell correction and tokenization of natural language, said system comprising: An artificial neural network architecture, to generate variable length ‘character level output streams’ for system fed variable length ‘character level input streams’; An auto-encoder for injecting random character level modifications to the variable length ‘character level input streams’, wherein the characters include a space-between-token character; and An unsupervised training mechanism for adjusting said neural network to learn correct variable length ‘character level output streams’, wherein correct variable length ‘character level output streams’ needs to be similar to respective original variable length ‘character level input streams’ prior to their random character level modifications. 2. The system according to claim 1 wherein the random character level modifications are selected from the group consisting of adding random characters, deleting characters, transposing characters and replacing characters. 3. The system according to claim 2 wherein said neural network is implemented using a sequence to sequence artificial neural network architecture, sequences of the variable length ‘character level input streams’ are mapped to a hidden state, and sequences of the variable length ‘character level output streams’ are generated from the hidden state. 4. The system according to claim 3 wherein the sequence to sequence artificial neural network architecture is implemented using a bidirectional long short-term memory (LSTM) input layer. 5. The system according to claim 4 wherein the variable length ‘character level input streams’ are Unicode character streams, and further comprising a UTF-8 encoder for applying UTF-8 encoding to the Unicode character streams prior to their inputting to said neural network. 6. 
The system according to claim 5 wherein said unsupervised training mechanism is further adapted for adjusting said neural network to learn a per-character embedding representation of the variable length ‘character level input streams’, in parallel to the learning of correct variable length ‘character level output streams’. 7. The system according to claim 2 further comprising a random modification selector for randomly selecting the character level modifications from the group. 8. The system according to claim 7, wherein said auto-encoder is further adapted for incrementing the frequency of injecting the random character level modifications to the variable length ‘character level input streams’, responsive to an increase in the level of similarity of the variable length ‘character level output streams’ to the respective original variable length ‘character level input streams’ prior to their random character level modifications. 9. The system according to claim 1, wherein at least some of the variable length ‘character level input streams’, fed to the system represent dialogs, and dialog metadata is at least partially utilized by said artificial neural network to generate the variable length ‘character level output streams’. 10. The system according to claim 9, wherein dialog metadata at least partially includes dialog state data. 11. 
A neural network based system for semantic role assignment of dialog utterances, said system comprising: An artificial recurrent neural network architecture, implemented using long short-term memory (LSTM) cells, to generate variable length ‘tagged tokens output streams’ for system fed variable length ‘dialog utterance input streams’; and A weakly supervised training mechanism for feeding to said artificial recurrent neural network, one or more variable length ‘dialog utterance input streams’ with their respective correctly-tagged variable length ‘tagged tokens output streams’, as initial input training data, and for adjusting said recurrent neural network to learn correct variable length ‘tagged tokens output streams’, by generating, and suggesting for system curator tagging correctness feedback—additional variable length ‘dialog utterance input streams’ with their respective variable length ‘tagged tokens output streams’ as tagged by said recurrent neural network—wherein correct tagging of the suggested additional variable length ‘dialog utterance input streams’ improves the capability of said recurrent neural network to refine the decision boundaries between correctly and incorrectly tagged inputs and to more correctly tag following system fed variable length ‘dialog utterance input streams’. 12. The system according to claim 11, wherein at least some of the variable length ‘dialog utterance input streams’, fed to the system represent dialogs, and dialog metadata is at least partially utilized by said artificial recurrent neural network to generate the variable length ‘tagged tokens output streams’. 13. The system according to claim 12, wherein dialog metadata at least partially includes dialog state data. 14. 
The system according to claim 11, wherein said weakly supervised training mechanism is further adapted to modify the variable length ‘tagged tokens output stream’ of a specific given incorrectly labeled variable length ‘dialog utterance input stream’, without retraining of the entire said recurrent neural network, by reiterating the variable length ‘dialog utterance input stream’ and applying gradient learning with a low learning rate across multiple training epochs. 15. The system according to claim 11, wherein said weakly supervised training mechanism is further adapted to self-improve while actively handling real end-user variable length ‘dialog utterance input streams’ by utilizing under-utilized Central Processing Unit (CPU) cycles of its hosting computer to run additional epochs of training. 16. The system according to claim 1, further comprising: An artificial recurrent neural network architecture, implemented using long short-term memory (LSTM) cells, to generate variable length ‘tagged tokens output streams’ for system fed variable length ‘dialog utterance input streams’; and A weakly supervised training mechanism for feeding to said artificial recurrent neural network, one or more variable length ‘dialog utterance input streams’ with their respective correctly-tagged variable length ‘tagged tokens output streams’, as initial input training data, and for adjusting said recurrent neural network to learn correct variable length ‘tagged tokens output streams’, by generating, and suggesting for system curator tagging correctness feedback—additional variable length ‘dialog utterance input streams’ with their respective variable length ‘tagged tokens output streams’ as tagged by said recurrent neural network—wherein correct tagging of the suggested additional variable length ‘dialog utterance input streams’ improves the capability of said recurrent neural network to refine the decision boundaries between correctly and incorrectly tagged inputs and to more 
correctly tag following system fed variable length ‘dialog utterance input streams’; and wherein variable length ‘character level output streams’ generated by said artificial neural network for variable length ‘character level input streams’, are fed as variable length ‘dialog utterance input streams’ to said artificial recurrent neural network. 17. A method for spell correction and tokenization of natural language, said method comprising: feeding variable length ‘character level input streams’ to an artificial neural network architecture, to generate variable length ‘character level output streams’; injecting random character level modifications to the variable length ‘character level input streams’, wherein the characters include a space-between-token character; and adjusting the neural network to learn correct variable length ‘character level output streams’, wherein correct variable length ‘character level output streams’ needs to be similar to respective original variable length ‘character level input streams’ prior to their random character level modifications. 18. The method according to claim 17 wherein the random character level modifications are selected from the group consisting of adding random characters, deleting characters, transposing characters and replacing characters. 19. 
A method for semantic role assignment of dialog utterances, said method comprising: feeding variable length ‘dialog utterance input streams’ to an artificial recurrent neural network architecture, to generate variable length ‘tagged tokens output streams’; feeding to the artificial recurrent neural network, one or more variable length ‘dialog utterance input streams’ with their respective correctly-tagged variable length ‘tagged tokens output streams’, as initial input training data; and adjusting the recurrent neural network to learn correct variable length ‘tagged tokens output streams’, by generating, and suggesting for system curator tagging correctness feedback—additional variable length ‘dialog utterance input streams’ with their respective variable length ‘tagged tokens output streams’ as tagged by the recurrent neural network—wherein correct tagging of the suggested additional variable length ‘dialog utterance input streams’ improves the capability of the recurrent neural network to refine the decision boundaries between correctly and incorrectly tagged inputs and to more correctly tag following fed variable length ‘dialog utterance input streams’. 20. The method according to claim 19, further comprising modifying the variable length ‘tagged tokens output stream’ of a specific given incorrectly labeled variable length ‘dialog utterance input stream’, without retraining of the entire recurrent neural network, by reiterating the variable length ‘dialog utterance input stream’ and applying gradient learning with a low learning rate across multiple training epochs.
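The random character-level modifications in claim 2 (adding, deleting, transposing, and replacing characters, with the space-between-token character included so that tokenization errors are also simulated) can be sketched as a simple noising function. This is an illustrative reconstruction, not the patent's implementation; the corruption probability `p` and the alphabet are assumptions:

```python
import random

def inject_noise(text, p=0.1, alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Randomly add, delete, transpose, or replace characters with probability p.

    The space character is part of the alphabet, so token boundaries can
    themselves be corrupted, as the claims require.
    """
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if random.random() < p:
            op = random.choice(["add", "delete", "transpose", "replace"])
            if op == "add":       # insert a random character before the current one
                out.append(random.choice(alphabet))
                out.append(chars[i])
            elif op == "delete":  # drop the current character
                pass
            elif op == "transpose" and i + 1 < len(chars):
                out.append(chars[i + 1])  # swap current and next characters
                out.append(chars[i])
                i += 1
            else:                 # replace with a random character
                out.append(random.choice(alphabet))
        else:
            out.append(chars[i])
        i += 1
    return "".join(out)
```

In the unsupervised setup of claim 1, the clean string serves as the training target and the noised string as the input, so no hand-labeled spelling corrections are needed.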
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Disclosed are systems, methods, circuits and associated computer executable code for deep learning based natural language understanding, wherein training of one or more neural networks, includes: producing character strings inputs ‘noise’ on a per-character basis, and introducing the produced ‘noise’ into machine training character strings inputs fed to a ‘word tokenization and spelling correction language-model’, to generate spell corrected word sets outputs; feeding machine training word sets inputs, including one or more ‘right’ examples of correctly semantically-tagged word sets, to a ‘word semantics derivation model’, to generate semantically tagged sentences outputs. Upon models reaching a training ‘steady state’, the ‘word tokenization and spelling correction language-model’ is fed with input character strings representing ‘real’ linguistic user inputs, generating word sets outputs that are fed as inputs to the word semantics derivation model for generating semantically tagged sentences outputs.
G06N3/088
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Disclosed are systems, methods, circuits and associated computer executable code for deep learning based natural language understanding, wherein training of one or more neural networks, includes: producing character strings inputs ‘noise’ on a per-character basis, and introducing the produced ‘noise’ into machine training character strings inputs fed to a ‘word tokenization and spelling correction language-model’, to generate spell corrected word sets outputs; feeding machine training word sets inputs, including one or more ‘right’ examples of correctly semantically-tagged word sets, to a ‘word semantics derivation model’, to generate semantically tagged sentences outputs. Upon models reaching a training ‘steady state’, the ‘word tokenization and spelling correction language-model’ is fed with input character strings representing ‘real’ linguistic user inputs, generating word sets outputs that are fed as inputs to the word semantics derivation model for generating semantically tagged sentences outputs.
Methods and systems are described for spectral decomposition of composite solid-state spin environments through quantum control of electronic spin impurities. A sequence of spin-control modulation pulses are applied to the electronic spin impurities in the solid-state spin systems. The spectral content of the spin bath that surrounds the electronic spin impurities within the solid-state spin system is extracted, by measuring the coherent evolution and associated decoherence of the spin impurities as a function of number of the applied modulation pulses, and the time-spacing between the pulses. Using these methods, fundamental properties of the spin environment such as the correlation times and the coupling strengths for both electronic and nuclear spins in the spin bath, can be determined.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: applying a sequence of spin-control modulation pulses to electronic spin impurities in a solid-state spin system; and extracting a spectral content of a spin bath that surrounds the electronic spin impurities within the solid-state spin system, by measuring the coherent evolution and associated decoherence of the spin impurities as a function of number of the applied modulation pulses, and the time-spacing between the pulses. 2. The method of claim 1, wherein the act of measuring the coherent evolution and associated decoherence of the spin impurities comprises: defining a time-dependent coherence function C(t)=e^−χ(t) to represent the coherence of spin impurities within the solid-state spin system, where χ(t) is a decoherence functional that describes the decoherence of the spin impurities as a function of time; and measuring the time-dependent coherence function C(t)=e^−χ(t) so as to extract a spectral component S(ω0) of the composite solid-state spin system at the frequency ω0. 3. The method of claim 2, wherein the modulation pulse sequence has a modulation waveform described in a frequency domain by a filter function Ft(ω) that is mathematically related to the decoherence functional by: χ(t) = (1/π) ∫₀^∞ dω S(ω) Ft(ω)/ω², where S(ω) is a spectral function describing coupling of the spin impurities to a spin bath environment of the composite solid-state spin system. 4. The method of claim 1, wherein the act of extracting the spectral content at a desired frequency ω0 comprises subjecting the spin impurities to a spectral δ-function modulation, with an ideal filter function Ft(ω) with a Dirac delta function localized at ω=ω0, so that the spectral content of the spin bath at the desired frequency ω0 is given by S(ω0)=−π ln(C(t))/t and the ideal filter function Ft(ω) is mathematically represented by: Ft(ω)/(ω²t)=δ(ω−ω0) 5. 
The method of claim 4, further comprising repeating, for a number of different frequencies ω=ωi, i=1 . . . n, the acts of subjecting the spin impurities to spectral δ-function modulations with the Dirac delta function localized at each frequency ωi, so as to extract the spectral content S(ω) at all of the different frequencies ω=ωi, i=1 . . . n to obtain a broad range of spectral decomposition for the spin bath. 6. The method of claim 3, further comprising: approximating the delta function in the filter function Ft(ω) at a frequency slightly different from ω0, then extracting a spectral component S(ω0) of the composite solid-state spin system at the slightly different frequency. 7. The method of claim 6, wherein the modulation pulse sequence is an n-pulse CPMG sequence; and wherein a mathematical formula for the filter function for the n-pulse CPMG sequence is: FnCPMG(ωt) = 8 sin²(ωt/2) sin⁴(ωt/(4n)) / cos²(ωt/(2n)). 8. The method of claim 7, wherein the modulation pulse sequence is an n-pulse XY sequence. 9. The method of claim 1, wherein the solid state system is a diamond crystal, the spin impurities are NV centers in the diamond crystal. 10. The method of claim 9, wherein the spin bath environment in the diamond crystal is dominated by fluctuating N (nitrogen atom) electronic spin impurities so as to cause decoherence of the NV centers through magnetic dipolar interactions. 11. The method of claim 10, wherein the N spins of the spin bath are randomly oriented, and wherein the act of extracting the spectral content of the spin bath comprises extracting a Lorentzian spectrum of the N spin bath's coupling to the NV centers, given by: S(ω) = (Δ²τc/π) · 1/(1 + (ωτc)²), where Δ is the average coupling strength of the N bath to the NV spin impurities, and where τc is the correlation time of the N bath spins with each other. 12. 
The method of claim 11, further comprising the act of determining the values of Δ and τc from the extracted spectrum S(ω). 13. A system comprising: a microwave pulse generator configured to generate a sequence of spin-control modulation pulses and to apply the pulses to a sample containing electronic spin impurities in a solid-state spin system; and a processing system configured to measure the coherent evolution and associated decoherence of the electronic spin impurities as a function of the number of the applied pulses and the time-spacing between the pulses, so as to extract a spectral content of a spin bath that surrounds the electronic spin impurities within the solid-state spin system. 14. The system of claim 13, wherein the electronic spin impurities comprise NV (nitrogen-vacancy) centers, and wherein the solid-state spin system comprises a diamond crystal. 15. The system of claim 13, wherein the spin-bath environment comprises 13C nuclear spin impurities and N electronic spin impurities within the diamond crystal. 16. The system of claim 13, further comprising an optical system, including an optical source configured to generate excitation optical pulses that initialize and read out the spin states of the spin impurities, when applied to the sample. 17. The system of claim 16, wherein the optical source is a laser tunable to a frequency of about 532 nm. 18. The system of claim 16, wherein the processing system comprises a computer-controlled digital delay generator coupled to the optical source and the microwave source and configured to control the timing of the microwave pulses and the optical pulses. 19. The system of claim 16, further comprising a detector configured to detect output radiation from the NV centers after the microwave pulses and the optical pulses have been applied thereto. 20. 
The system of claim 16, wherein the optical system further comprises an acousto-optic modulator configured to time the optical pulses so as to prepare and read out the NV spin states. 21. The system of claim 19, wherein the optical system further includes at least one of: a dichroic filter configured to separate fluorescent radiation generated by the NV centers in response to the excitation optical pulses; and an objective configured to collect the fluorescent radiation generated by the NV centers in response to the excitation optical pulses and direct the collected fluorescence to the detector. 22. The system of claim 13, wherein the solid state system is a diamond crystal, the spin impurities are NV centers in the diamond crystal, and the spin bath environment in the diamond crystal is dominated by fluctuating N (nitrogen atom) electronic spin impurities, so that the spectrum of the N spin bath's coupling to the NV centers is a Lorentzian spectrum given by: S(ω) = (Δ²τc/π) · 1/(1 + (ωτc)²), where Δ is the average coupling strength of the N bath to the NV spin impurities, and where τc is the correlation time of the N bath spins with each other. 23. The system of claim 22, wherein the processing system is further configured to determine the values of Δ and τc from the extracted spectrum S(ω). 24. The system of claim 13, wherein the electronic spin impurities comprise phosphorus donors, and wherein the solid-state spin system comprises silicon. 25. The system of claim 13, wherein the modulation pulse sequence comprises at least one of: an n-pulse CPMG sequence; and an n-pulse XY sequence.
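The two formulas in claims 7 and 11, the n-pulse CPMG filter function and the Lorentzian bath spectrum, can be evaluated numerically as a sanity check on the reconstructed notation. The function names below are illustrative, not from the patent:

```python
import math

def cpmg_filter(omega_t, n):
    """n-pulse CPMG filter function: 8 sin²(ωt/2) sin⁴(ωt/4n) / cos²(ωt/2n)."""
    return (8 * math.sin(omega_t / 2) ** 2
              * math.sin(omega_t / (4 * n)) ** 4
              / math.cos(omega_t / (2 * n)) ** 2)

def lorentzian_bath(omega, delta, tau_c):
    """Lorentzian bath spectrum S(ω) = (Δ²τc/π) · 1/(1 + (ωτc)²)."""
    return (delta ** 2 * tau_c / math.pi) / (1 + (omega * tau_c) ** 2)
```

At ωt = 0 the filter function vanishes (no accumulated phase), and S(ω) falls off from its ω = 0 peak with characteristic width 1/τc, which is how the correlation time is read off the extracted spectrum.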
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Methods and systems are described for spectral decomposition of composite solid-state spin environments through quantum control of electronic spin impurities. A sequence of spin-control modulation pulses are applied to the electronic spin impurities in the solid-state spin systems. The spectral content of the spin bath that surrounds the electronic spin impurities within the solid-state spin system is extracted, by measuring the coherent evolution and associated decoherence of the spin impurities as a function of number of the applied modulation pulses, and the time-spacing between the pulses. Using these methods, fundamental properties of the spin environment such as the correlation times and the coupling strengths for both electronic and nuclear spins in the spin bath, can be determined.
G06N99/00
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Methods and systems are described for spectral decomposition of composite solid-state spin environments through quantum control of electronic spin impurities. A sequence of spin-control modulation pulses are applied to the electronic spin impurities in the solid-state spin systems. The spectral content of the spin bath that surrounds the electronic spin impurities within the solid-state spin system is extracted, by measuring the coherent evolution and associated decoherence of the spin impurities as a function of number of the applied modulation pulses, and the time-spacing between the pulses. Using these methods, fundamental properties of the spin environment such as the correlation times and the coupling strengths for both electronic and nuclear spins in the spin bath, can be determined.
Methods, apparatuses, and embodiments related to a technique for monitoring construction of a structure. In an example, a robot with a sensor, such as a LIDAR device, enters a building and obtains sensor readings of the building. The sensor data is analyzed and components related to the building are identified. The components are mapped to corresponding components of an architect's three dimensional design of the building, and the installation of the components is checked for accuracy. When a discrepancy above a certain threshold is detected, an error is flagged and project managers are notified. Construction progress updates do not give credit for completed construction that includes an error, resulting in improved accuracy progress updates and corresponding improved accuracy for project schedule and cost estimates.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for monitoring construction of a building, the method comprising: receiving, by a computer system, training data that includes three dimensional training point data that corresponds to a plurality of objects associated with a training building and that includes image data that corresponds to a plurality of images of the objects associated with the training building, wherein each point of the three dimensional training point data represents a three dimensional coordinate that corresponds to a surface point of one of the objects associated with the training building, and wherein the training data includes data that identifies the objects associated with the training building; generating a convolution neural network, by the computer system; training the convolution neural network, by the computer system, based on the training data and the data that identifies the objects; receiving, by the computer system, building object data that includes three dimensional point data after a LIDAR system scans objects associated with a building under construction to determine the three dimensional point data; receiving, by the computer system, building object image data that corresponds to images of the objects associated with the building after an imaging device acquires the images of the objects associated with the building; analyzing, by the computer system, by use of the convolution neural network, the building object data and the building object image data to identify the objects associated with the building and to determine physical properties of the objects associated with the building; receiving, by the computer system, building design data that represents physical design plans associated with the building; determining a mapping, by the computer system, of the objects associated with the building to objects associated with the physical design plans of the building; comparing, by the 
computer system, physical properties of the objects associated with the building to physical properties of the objects associated with the physical design plans of the building; based on the comparison, detecting, by the computer system, a discrepancy beyond a predetermined threshold between a physical property of an object associated with the building and a corresponding physical property of a corresponding object associated with the physical design plans of the building; and sending a message, by the computer system, that indicates the discrepancy. 2. The method of claim 1, wherein the training data is data derived from design data output by a computer-aided design (CAD) application that was used to capture physical design data associated with the training building, wherein the data that identifies the objects associated with the training building are data that was input by use of the CAD application and that labels the objects associated with the training building, wherein physical properties of a first object of the objects associated with the building include any of a dimension of the first object, a shape of the first object, a color of the first object, a surface texture of the first object, or a location of the first object, wherein physical properties of a second object of the objects associated with the physical design plans of the building include any of a dimension of the second object, a shape of the second object, a color of the second object, a surface texture of the second object, or a location of the second object, wherein the first object or the second object are any of a pipe, a beam, a wall, a floor, a ceiling, a toilet, a roof, a door, a door frame, a metal stud, a wood stud, a light fixture, a piece of sheetrock, a water heater, an air conditioner unit, a water fountain, a cabinet, a table, a desk, a refrigerator, or a sink, wherein the imaging device is a camera, a video camera, or a mobile device, wherein the building design data are design 
data output by the CAD application, and wherein the physical design plans of the building were captured by use of the CAD application. 3. The method of claim 2, wherein the CAD application is AutoCAD from Autodesk, Inc. or MicroStation from Bentley Software, Inc., and wherein the mobile device is any one of a smart phone, a tablet computer, a portable media device, a wearable device, or a laptop computer. 4. The method of claim 1, wherein the discrepancy indicates that a pipe is located in an incorrect location, the method further comprising: receiving data that represents a schedule for construction of the building; determining, based on the received building object data, that causing the pipe to be located in a correct location will cause the schedule for the construction of the building to be delayed; and sending a message that indicates that the construction of the building will be delayed. 5. A method comprising: receiving, by a computer system, sensor data determined based on sensor readings of a structure that is under construction, wherein the sensor data indicates a physical property of an object associated with the structure; analyzing, by the computer system, the sensor data to determine a mapping between the object associated with the structure and a corresponding object of a three dimensional model of the structure; detecting, by the computer system, a discrepancy between the indicated physical property of the object associated with the structure and a physical property of the corresponding object of the three dimensional model; and sending a message, by the computer system, that indicates the discrepancy. 6. The method of claim 5, wherein the sensor data corresponds to data obtained by a LIDAR system based on a scan of the structure, and wherein the sensor readings of the structure are the data obtained by the LIDAR system based on the scan of the structure. 7. 
The method of claim 6, wherein the sensor data includes three dimensional point data that corresponds to objects associated with the structure. 8. The method of claim 5, wherein the sensor data corresponds to data obtained by an image capture device while capturing an image of the structure, and wherein the sensor readings of the structure are the data obtained by the image capture device while capturing the image of the structure. 9. The method of claim 5, wherein the sensor data corresponds to data obtained by a sonar device while capturing a sonar image of the structure, and wherein the sensor readings of the structure are the data obtained by the sonar device while capturing the sonar image of the structure. 10. The method of claim 5, wherein the sensor data corresponds to data obtained by a radar system while capturing a radar image of the structure, and wherein the sensor readings of the structure are the data obtained by the radar system while capturing the radar image of the structure. 11. The method of claim 5, wherein the analyzing of the sensor data to determine the mapping includes: identifying the object associated with the structure based on a convolution neural network, and determining the mapping based on the identification of the object. 12. The method of claim 5, wherein the structure is any of a building, an airplane, a ship, a submarine, a space launch vehicle, or a space vehicle. 13. The method of claim 5, further comprising: receiving data that correlates to a schedule for construction of the structure; determining, based on the received sensor data, that fixing the discrepancy will cause the schedule for the construction of the structure to be delayed; and sending a message that indicates that fixing the discrepancy will cause the schedule for the construction of the structure to be delayed. 14. 
The method of claim 5, further comprising: receiving structure design data that represents physical design plans of the structure, wherein the three dimensional model of the structure is based on the structure design data. 15. The method of claim 5, wherein the discrepancy is a difference above a predetermined threshold in a dimension of the object and a corresponding dimension of the corresponding object. 16. The method of claim 5, wherein the discrepancy is a difference in color of a portion of the object and a color of a corresponding portion of the corresponding object. 17. The method of claim 5, wherein the discrepancy is a difference above a predetermined threshold of a location of the object and a location of the corresponding object. 18. A computing system comprising: a processor; a networking interface coupled to the processor; and a memory coupled to the processor and storing instructions which, when executed by the processor, cause the computing system to perform operations including: receiving, via the networking interface, sensor data determined based on sensor readings of a structure, wherein the sensor data indicates a physical property of an object associated with the structure; analyzing the sensor data to determine a mapping between the object associated with the structure and a corresponding object of a three dimensional model of the structure; and detecting a discrepancy between the indicated physical property of the object associated with the structure and a physical property of the corresponding object of the three dimensional model. 19. The computing system of claim 18, further comprising: a LIDAR device, wherein the sensor data includes three dimensional data points determined by the LIDAR device based on a scan of the structure. 20. The computing system of claim 19, further comprising: an image capture device, wherein the sensor data includes data determined by the image capture device based on a captured image of the structure. 21. 
The computing system of claim 18, wherein the object associated with the building is a component associated with the building, and wherein the corresponding object of the three dimensional model of the structure is a corresponding component of the three dimensional model of the structure.
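The threshold-based discrepancy checks described in claims 15 through 17 can be illustrated with a minimal sketch. The function name, units, and tolerance value below are illustrative, not taken from the patent:

```python
# Hypothetical sketch of the discrepancy check: compare a measured physical
# property of a scanned object (e.g. a dimension, in meters) against the
# corresponding object in the three dimensional model, flagging differences
# above a predetermined threshold.

def detect_discrepancy(measured, modeled, threshold):
    """Return True when |measured - modeled| exceeds the threshold."""
    return abs(measured - modeled) > threshold

# Example: a wall measured at 2.48 m against a design height of 2.50 m,
# with a 0.05 m tolerance, is within spec; 2.60 m is not.
assert detect_discrepancy(2.48, 2.50, 0.05) is False
assert detect_discrepancy(2.60, 2.50, 0.05) is True
```

The same comparison generalizes to color or location discrepancies by substituting an appropriate distance measure for the absolute difference.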
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods, apparatuses, and embodiments related to a technique for monitoring construction of a structure. In an example, a robot with a sensor, such as a LIDAR device, enters a building and obtains sensor readings of the building. The sensor data is analyzed and components related to the building are identified. The components are mapped to corresponding components of an architect's three dimensional design of the building, and the installation of the components is checked for accuracy. When a discrepancy above a certain threshold is detected, an error is flagged and project managers are notified. Construction progress updates do not give credit for completed construction that includes an error, resulting in more accurate progress updates and correspondingly more accurate project schedule and cost estimates.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods, apparatuses, and embodiments related to a technique for monitoring construction of a structure. In an example, a robot with a sensor, such as a LIDAR device, enters a building and obtains sensor readings of the building. The sensor data is analyzed and components related to the building are identified. The components are mapped to corresponding components of an architect's three dimensional design of the building, and the installation of the components is checked for accuracy. When a discrepancy above a certain threshold is detected, an error is flagged and project managers are notified. Construction progress updates do not give credit for completed construction that includes an error, resulting in more accurate progress updates and correspondingly more accurate project schedule and cost estimates.
To enable efficient abduction even for observations that are faulty or inadequately modeled, a relaxed abduction problem is proposed in order to explain the largest possible part of the observations with as few assumptions as possible. On the basis of two preference orders over a subset of observations and a subset of assumptions, tuples can therefore be determined such that the theory, together with the subset of assumptions, explains the subset of observations. The formulation as a multi-criteria optimization problem eliminates the need to offset assumptions made and explained observations against one another. Due to the technical soundness of the approach, specific properties of the set of results (such as correctness, completeness, etc.) can be checked, which is particularly advantageous in safety-critical applications. The complexity of the problem-solving process can be influenced and therefore flexibly adapted in terms of domain requirements through the selection of the underlying representation language and preference relations. The invention can be applied to any technical system, e.g. plants or power stations.
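The idea of trading off assumptions against explained observations without a single combined score can be sketched as a Pareto filter over candidate (assumptions, observations) pairs. This is a toy stand-in for intuition only, not the patent's hypergraph-based label-correction algorithm:

```python
# A candidate dominates another if it uses no more assumptions and explains
# no fewer observations, and is strictly better in at least one criterion.
# Candidates are summarized here as (num_assumptions, num_explained) tuples.

def pareto_front(candidates):
    """Return the non-dominated candidates, preserving input order."""
    front = []
    for a, o in candidates:
        dominated = any(
            a2 <= a and o2 >= o and (a2 < a or o2 > o)
            for a2, o2 in candidates
        )
        if not dominated:
            front.append((a, o))
    return front

# (2, 5) dominates (3, 5), which is dropped; (1, 3) survives because it
# uses strictly fewer assumptions than any better-explaining candidate.
assert pareto_front([(2, 5), (3, 5), (1, 3)]) == [(2, 5), (1, 3)]
```

Keeping the whole Pareto front, rather than collapsing the two criteria into one weight, is what "eliminates the need to offset assumptions made and explained observations against one another."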
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of actuating a technical system, the method comprising: determining, by a data processor, a relaxed abduction problem; solving, by the data processor, the relaxed abduction problem; and actuating the technical system according to the solution of the relaxed abduction problem. 2. The method as claimed in claim 1, further comprising determining, by the data processor, tuples by taking as a basis two orders of preference over a subset of observations and a subset of assumptions, so that a theory together with the subset of the assumptions explains the subset of the observations. 3. The method as claimed in claim 2, in which the relaxed abduction problem is determined to be RAP=(T, A, O, <A, <O), wherein the theory is T, a set of abducible axioms is A, a set of observations is O with T⊭O, and further comprising taking orders of preference <A⊂P(A)×P(A) and <O⊂P(O)×P(O) as a basis for determining <-minimal tuples (A, O)∈P(A)×P(O), so that T∪A is consistent and T∪A|=O holds. 4. The method as claimed in claim 2, in which the relaxed abduction problem is solved by transforming the relaxed abduction problem into a hypergraph, so that the tuples (A, O) are encoded by pareto-optimal paths in the hypergraph. 5. The method as claimed in claim 4, wherein the pareto-optimal paths are determined via a label approach. 6. The method as claimed in claim 4, further comprising inducing hyperedges of the hypergraph by transcriptions of prescribed rules. 7. The method as claimed in claim 6, wherein the prescribed rules are determined as follows: (CR1) from A ⊑ A1 infer A ⊑ B, if A1 ⊑ B ∈ T; (CR2) from A ⊑ A1 and A ⊑ A2 infer A ⊑ B, if A1 ⊓ A2 ⊑ B ∈ T; (CR3) from A ⊑ A1 infer A ⊑ ∃r.B, if A1 ⊑ ∃r.B ∈ T; (CR4) from A ⊑ ∃r.A1 and A1 ⊑ A2 infer A ⊑ B, if ∃r.A2 ⊑ B ∈ T; (CR5) from A ⊑ ∃r1.B infer A ⊑ ∃s.B, if r1 ⊑ s ∈ T; (CR6) from A ⊑ ∃r1.A1 and A1 ⊑ ∃r2.B infer A ⊑ ∃s.B, if r1 ∘ r2 ⊑ s ∈ T. 8. 
The method as claimed in claim 4, wherein a weighted hypergraph HRAP=(V, E), which is induced by the relaxed abduction problem, is determined by V={(A⊑B),(A⊑∃r.B) | A, B∈NCT, r∈NR}, wherein VT={(A⊑A),(A⊑⊤)|A∈NCT}⊂V denotes a set of final states and E denotes a set of the hyperedges e=(T(e), h(e), w(e)), so that the following holds: an axiom a∈T∪A exists that justifies derivation h(e)∈V from T(e)⊂V based on one of the prescribed rules, wherein the edge weight w(e)=(wA(e), wO(e)) is determined according to wA(e)={a} if a∈A, ∅ otherwise, and wO(e)={h(e)} if h(e)∈O, ∅ otherwise. 9. The method as claimed in claim 8, wherein pX,t=(VX,t, EX,t) is determined as a hyperpath in H=(V, E) from X to t if (1) t∈X and pX,t=({t}, ∅) or (2) there is an edge e∈E, so that h(e)=t and T(e)={y1, …, yk} holds. 10. The method as claimed in claim 9, wherein shortest hyperpaths are determined by taking account of two preferences. 11. The method as claimed in claim 10, wherein the shortest hyperpaths are determined by taking account of two preferences via a label correction algorithm. 12. The method as claimed in claim 11, wherein the labels encode pareto-optimal paths to the hitherto found nodes of the hypergraph. 13. The method as claimed in claim 12, wherein alterations along the hyperedges are propagated by a meet operator and/or by a join operator. 14. The method as claimed in claim 1, wherein the relaxed abduction problem is determined via a piece of description logic. 15. An apparatus for actuating a technical system by the data processor performing the method as claimed in claim 1.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: To enable efficient abduction even for observations that are faulty or inadequately modeled, a relaxed abduction problem is proposed in order to explain the largest possible part of the observations with as few assumptions as possible. On the basis of two preference orders over a subset of observations and a subset of assumptions, tuples can therefore be determined such that the theory, together with the subset of assumptions, explains the subset of observations. The formulation as a multi-criteria optimization problem eliminates the need to offset assumptions made and explained observations against one another. Due to the technical soundness of the approach, specific properties of the set of results (such as correctness, completeness, etc.) can be checked, which is particularly advantageous in safety-critical applications. The complexity of the problem-solving process can be influenced and therefore flexibly adapted in terms of domain requirements through the selection of the underlying representation language and preference relations. The invention can be applied to any technical system, e.g. plants or power stations.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: To enable efficient abduction even for observations that are faulty or inadequately modeled, a relaxed abduction problem is proposed in order to explain the largest possible part of the observations with as few assumptions as possible. On the basis of two preference orders over a subset of observations and a subset of assumptions, tuples can therefore be determined such that the theory, together with the subset of assumptions, explains the subset of observations. The formulation as a multi-criteria optimization problem eliminates the need to offset assumptions made and explained observations against one another. Due to the technical soundness of the approach, specific properties of the set of results (such as correctness, completeness, etc.) can be checked, which is particularly advantageous in safety-critical applications. The complexity of the problem-solving process can be influenced and therefore flexibly adapted in terms of domain requirements through the selection of the underlying representation language and preference relations. The invention can be applied to any technical system, e.g. plants or power stations.
The structure of an untagged document can be derived using a predictive model that is trained in a supervised learning framework based on a corpus of tagged training documents. Analyzing the training documents results in a plurality of document part feature vectors, each of which correlates a category defining a document part (for example, “title” or “body paragraph”) with one or more feature-value pairs (for example, “font=Arial” or “alignment=centered”). Any suitable machine learning algorithm can be used to train the predictive model based on the document part feature vectors extracted from the training documents. Once the predictive model has been trained, it can receive feature-value pairs corresponding to a portion of an untagged document and make predictions with respect to how that document part should be categorized. The predictive model can therefore generate tag metadata that defines a structure of the untagged document in an automated fashion.
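A toy sketch of the supervised scheme this abstract describes: each training example links a set of feature-value pairs to a document-part category, and prediction picks the category of the best-overlapping training example. The nearest-example rule is merely a stand-in for "any suitable machine learning algorithm", and all feature names and values below are illustrative:

```python
# Training corpus: feature-value pairs (visual appearance) mapped to a
# document-part category, mirroring the document part feature vectors
# described in the abstract.
training = [
    ({"font": "Arial", "size": "largest", "alignment": "centered"}, "title"),
    ({"font": "Arial", "size": "intermediate", "alignment": "left"}, "heading"),
    ({"font": "Times", "size": "smallest", "alignment": "justified"}, "body paragraph"),
]

def predict(features):
    """Return the category of the training example with the most matching
    feature-value pairs (a crude 1-nearest-neighbor over feature overlap)."""
    best_example, best_category = max(
        training,
        key=lambda pair: sum(1 for k, v in pair[0].items() if features.get(k) == v),
    )
    return best_category

# A centered, largest-font Arial part is categorized as a title.
assert predict({"font": "Arial", "size": "largest", "alignment": "centered"}) == "title"
```

In a real system the prediction would also carry a confidence level (claim 9) so a user can accept or reject the categorization before tag metadata is embedded.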
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A document structure extraction method comprising: accessing, by a document structure analytics server, an untagged document that comprises a plurality of document parts, wherein certain of the document parts have a visual appearance that is defined by formatting information included in the untagged document, and wherein at least two of the document parts are distinguishable from each other based on having distinctive visual appearances; extracting at least a portion of the formatting information from the untagged document; for a particular one of the plurality of document parts, generating one or more feature-value pairs using the extracted formatting information, wherein each of the generated feature-value pairs characterizes the visual appearance of the particular document part by associating a particular value with a particular formatting feature; using a predictive model to predict a categorization for the particular document part based on the one or more feature-value pairs, wherein the predictive model applies a machine learning algorithm to make predictions based on a collection of categorized feature-value pairs aggregated from a corpus of tagged training documents; and defining tag metadata that associates the particular document part with the predicted categorization generated by the predictive model. 2. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a font size formatting feature with a particular font size value. 3. The document structure extraction method of claim 1, further comprising: identifying a characteristic of the untagged document; and selecting the predictive model based on the corpus of tagged training documents also having the identified characteristic. 4. 
The document structure extraction method of claim 1, wherein accessing the untagged document further comprises receiving the untagged document from a client computing device; and the method further comprises applying the tag metadata to the untagged document to produce a tagged document, and sending the tagged document to the client computing device. 5. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a font size formatting feature with a particular value that is selected from a group consisting of a largest font in the untagged document, an intermediate-sized font in the untagged document, and a smallest font in the untagged document. 6. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a font size formatting feature with a particular value that is selected from a group consisting of a font size that is larger than a preceding paragraph, a font size that is smaller than the preceding paragraph, a font size that is larger than a following paragraph, and a font size that is smaller than the following paragraph. 7. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a formatting feature with a particular value that defines the formatting feature for a first document part in relation to the formatting feature for a second document part. 8. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a particular value selected from a group consisting of left justification, center justification, right justification, and full justification with a paragraph alignment formatting feature. 9. The document structure extraction method of claim 1, further comprising using the predictive model to determine a confidence level in the categorization for the particular document part. 10. 
The document structure extraction method of claim 1, wherein accessing the untagged document further comprises receiving, from a document viewer executing on a client computing device, the plurality of document parts and the formatting information. 11. The document structure extraction method of claim 1, wherein accessing the untagged document further comprises receiving, by the document structure analytics server, a plurality of untagged documents from a document management system. 12. The document structure extraction method of claim 1, further comprising sending the tag metadata from the document structure analytics server to a client computing device, wherein the untagged document is stored at the client computing device. 13. The document structure extraction method of claim 1, further comprising embedding the tag metadata into the untagged document to produce a tagged document, wherein sending the tag metadata to the client computing device comprises sending the tagged document to the client computing device. 14. The document structure extraction method of claim 1, further comprising modifying the untagged document such that the visual appearance of the particular document part is further defined by the predicted categorization generated by the predictive model. 15. 
A non-transitory computer readable medium encoded with instructions that, when executed by one or more processors, cause a document structure analysis process to be invoked, the process comprising: identifying a plurality of training documents; accessing a particular one of the training documents, the particular training document comprising a plurality of document parts, wherein a particular one of the document parts has (a) a visual appearance defined by formatting information included in the particular training document, and (b) a document part categorization; generating, for the particular document part, one or more feature-value pairs using the formatting information, wherein each of the generated one or more feature-value pairs characterizes the visual appearance of the particular document part by correlating a particular value with a particular formatting feature; defining a document part feature vector that links the generated one or more feature-value pairs with the document part categorization; storing the document part feature vector in a memory resource hosted by a document structure analytics server; and using the document part feature vector to train a predictive model in a supervised learning framework, wherein the predictive model is configured to establish a predicted document part categorization based on at least one feature-value pair received from a client computing device. 16. The non-transitory computer readable medium of claim 15, wherein: a particular one of the generated feature-value pairs defines a proportion of the particular training document; and the document part categorization is selected from a group consisting of a heading, a title, and a body paragraph. 17. 
The non-transitory computer readable medium of claim 15, wherein: the plurality of training documents are identified on the basis of a common characteristic that is selected from a group consisting of an author and a topic keyword; and the predictive model is associated with the common characteristic. 18. A document structure evaluation system that comprises a memory device and a processor that is operatively coupled to the memory device, wherein the processor is configured to execute instructions stored in the memory that, when executed, cause the processor to carry out a document structure evaluation process that comprises: displaying, in a document viewer, an untagged document that comprises a plurality of document parts, wherein certain of the document parts have a visual appearance that is defined by formatting information included in the untagged document, and wherein at least two of the document parts are distinguishable from each other based on having distinctive visual appearances; sending, to a document structure analytics server, a particular one of the document parts and formatting information that characterizes the visual appearance of the particular document part; receiving, from the document structure analytics server, a predicted categorization for the particular document part; and embedding into the untagged document metadata that correlates the particular document part with the predicted categorization received from the document structure analytics server. 19. The document structure evaluation system of claim 18, wherein the process further comprises: receiving, from the document structure analytics server, a confidence level associated with the predicted categorization; and displaying, in the document viewer, the predicted categorization and the confidence level. 20. 
The document structure evaluation system of claim 18, wherein the process further comprises: displaying, in the document viewer, the predicted categorization; and receiving, from a user of the document viewer, an acceptance of the predicted categorization, wherein the acceptance is received before the metadata is embedded into the untagged document.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The structure of an untagged document can be derived using a predictive model that is trained in a supervised learning framework based on a corpus of tagged training documents. Analyzing the training documents results in a plurality of document part feature vectors, each of which correlates a category defining a document part (for example, “title” or “body paragraph”) with one or more feature-value pairs (for example, “font=Arial” or “alignment=centered”). Any suitable machine learning algorithm can be used to train the predictive model based on the document part feature vectors extracted from the training documents. Once the predictive model has been trained, it can receive feature-value pairs corresponding to a portion of an untagged document and make predictions with respect to how that document part should be categorized. The predictive model can therefore generate tag metadata that defines a structure of the untagged document in an automated fashion.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The structure of an untagged document can be derived using a predictive model that is trained in a supervised learning framework based on a corpus of tagged training documents. Analyzing the training documents results in a plurality of document part feature vectors, each of which correlates a category defining a document part (for example, “title” or “body paragraph”) with one or more feature-value pairs (for example, “font=Arial” or “alignment=centered”). Any suitable machine learning algorithm can be used to train the predictive model based on the document part feature vectors extracted from the training documents. Once the predictive model has been trained, it can receive feature-value pairs corresponding to a portion of an untagged document and make predictions with respect to how that document part should be categorized. The predictive model can therefore generate tag metadata that defines a structure of the untagged document in an automated fashion.
Convolution processing performance in digital image processing is enhanced using a data packing process for convolutional layers in deep neural networks and corresponding computation kernel code. The data packing process includes an input and weight packing of the input channels of data into a contiguous block of memory in preparation for convolution. In addition, the data packing process includes an output unpacking process for unpacking convolved data into output channel blocks of memory, where the input channel block and output channel block sizes are configured for efficient data transfer and data reuse during convolution. The input packing and output packing processes advantageously improve convolution performance and conserve power while satisfying the real-time demands of digital image processing.
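The input-packing step resembles the widely used im2col transformation: each input patch is copied into a contiguous row of memory so the convolution reduces to a single cache-friendly matrix multiply. A NumPy sketch under that assumption follows; the shapes, names, and stride-1/no-padding choices are illustrative, not taken from the patent:

```python
import numpy as np

def pack_input(x, k):
    """Pack k-by-k patches of a (C, H, W) input into contiguous rows
    (one row per output pixel), the im2col-style packing step."""
    c, h, w = x.shape
    oh, ow = h - k + 1, w - k + 1
    packed = np.empty((oh * ow, c * k * k), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            packed[i * ow + j] = x[:, i:i + k, j:j + k].ravel()
    return packed

def conv(x, weights, k):
    """Convolve via one matrix multiply over the packed block.
    weights: (out_channels, C*k*k). Returns (out_channels, OH, OW)."""
    c, h, w = x.shape
    oh, ow = h - k + 1, w - k + 1
    return (pack_input(x, k) @ weights.T).T.reshape(-1, oh, ow)

# 2 input channels, 4x4 image, 3 output channels, 3x3 all-ones kernels.
x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
wts = np.ones((3, 2 * 3 * 3))
assert conv(x, wts, 3).shape == (3, 2, 2)
```

The output reshape at the end plays the role of the output-unpacking step, scattering the multiply result back into per-output-channel blocks.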
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method of managing data for convolution processing, the method comprising: in a device having: a memory, an input channel for receiving a stack of input data, an output channel for receiving a stack of output data, and a convolution kernel containing a stack of weights for convolving the stack of input data into the stack of output data; packing the input stack into a continuous block of memory, packing the convolution kernel into a continuous block of memory, and unpacking the output stack based on the architecture of the device; and convolving the input stack into the output stack using the stack of weights in the convolution kernel. 2. A computer-implemented method as in claim 1, wherein packing the input stack into the continuous block of memory includes: reading all input blocks in the input stack corresponding to a portion of the input data; and arranging all of the input blocks into the continuous block of memory. 3. A computer-implemented method as in claim 2, wherein the portion of the input data to which the input blocks correspond is one or more input pixels and their neighboring pixels. 4. A computer-implemented method as in claim 1, wherein packing the output stack based on the architecture of the device is allocating a set of output blocks in the output stack to use a maximum number of registers, the set of output blocks in the output stack corresponding to the portion of input data being convolved. 5. A computer-implemented method as in claim 4, wherein the portion of input data being convolved is any one of a continuous row and a continuous column of input pixels. 6. 
A computer-implemented method as in claim 4, wherein convolving the input stack into the output stack includes: loading into memory the stack of weights corresponding to the portion of input data; arranging the loaded weights into a convolution weight matrix; calculating each value in the allocated set of output blocks in the output stack from the corresponding values in the input blocks and the convolution weight matrix.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Convolution processing performance in digital image processing is enhanced using a data packing process for convolutional layers in deep neural networks and corresponding computation kernel code. The data packing process includes an input and weight packing of the input channels of data into a contiguous block of memory in preparation for convolution. In addition, the data packing process includes an output unpacking process for unpacking convolved data into output channel blocks of memory, where the input channel block and output channel block sizes are configured for efficient data transfer and data reuse during convolution. The input packing and output packing processes advantageously improve convolution performance and conserve power while satisfying the real-time demands of digital image processing.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Convolution processing performance in digital image processing is enhanced using a data packing process for convolutional layers in deep neural networks and corresponding computation kernel code. The data packing process includes an input and weight packing of the input channels of data into a contiguous block of memory in preparation for convolution. In addition, the data packing process includes an output unpacking process for unpacking convolved data into output channel blocks of memory, where the input channel block and output channel block sizes are configured for efficient data transfer and data reuse during convolution. The input packing and output packing processes advantageously improve convolution performance and conserve power while satisfying the real-time demands of digital image processing.
A system for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven simulations of alternate candidate business action comprising a business data retrieval engine stored in a memory of and operating on a processor of a computing device, a business data analysis engine stored in a memory of and operating on a processor of a computing device and a business decision and business action path simulation engine stored in a memory of and operating on a processor of one or more computing devices has been developed.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven simulations of alternate candidate business decision comprising: a business data retrieval engine stored in a memory of and operating on a processor of a computing device; a business data analysis engine stored in a memory of and operating on a processor of a computing device; and a business decision and business action path simulation engine stored in a memory of and operating on a processor of one of more computing devices; wherein, the business information retrieval engine: (a) retrieves a plurality of business related data from a plurality of sources; (b) accept a plurality of analysis parameters and control commands directly from human interface devices or from one or more command and control storage devices; (b) stores accumulated retrieved information for processing by data analysis engine or predetermined data timeout; wherein the business information analysis engine: (c) retrieves a plurality of data types from the business information retrieval engine; (d) performs a plurality of analytical functions and transformations on retrieved data based upon the specific goals and needs set forth in a current campaign by business process analysis authors; wherein the business decision and business action path simulation engine: e) employs results of data analyses and transformations performed by the business information analysis engine, together with available supplemental data from a plurality of sources as well as any current campaign specific machine learning, commands and parameters from business process analysis authors to formulate current business operations and risk status reports; and (f) employs results of data analyses and transformations performed by the business information analysis engine, together with 
available supplemental data from a plurality of sources, any current campaign specific commands and parameters from business process analysis authors, as well as input gleaned from machine learned algorithms to deliver business action pathway simulations and business decision support to a first end user. 2. The system of claim 1, wherein the business information retrieval engine, stored in the memory of and operating on a processor of a computing device, employs a portal for human interface device input at least a portion of which are business related data and at least another portion of which are commands and parameters related to the conduct of a current business analysis campaign. 3. The system of claim 2, wherein the business information retrieval engine employs a high volume deep web scraper stored in the memory of and operating on a processor of a computing device, which receives at least some scrape control and spider configuration parameters from the highly customizable cloud based interface, coordinates one or more world wide web searches (scrapes) using both general search control parameters and individual web search agent (spider) specific configuration data, receives scrape progress feedback information which may lead to issuance of further web search control parameters, controls and monitors the spiders on distributed scrape servers, receives the raw scrape campaign data from scrape servers, and aggregates at least portions of scrape campaign data from each web site or web page traversed as per the parameters of the scrape campaign. 4. The system of claim 3, wherein the archetype spiders are provided by a program library and individual spiders are created using configuration files. 5. The system of claim 3, wherein scrape campaign requests are persistently stored and can be reused or used as the basis for similar scrape campaigns. 6. 
The system of claim 2, wherein the business information retrieval engine employs a multidimensional time series data store stored in a memory of and operating on a processor of a computing device to receive a plurality of data from a plurality of sensors of heterogeneous types, some of which may have heterogeneous reporting and data payload transmission profiles, aggregates the sensor data over a predetermined amount of time, a predetermined quantity of data or a predetermined number of events, retrieves a specific quantity of aggregated sensor data per each access connection predetermined to allow reliable receipt and inclusion of the data, transparently retrieves quantities of aggregated sensor data too large to be reliably transferred by one access connection using a further plurality of access connections to allow capture of all aggregated sensor data under conditions of heavy sensor data influx and stores aggregated sensor data in a simple key-value pair with very little or no data transformation from how the aggregated sensor data is received. 7. The system of claim 1, wherein the business data analysis engine employs a directed computational graph stored in the memory of and operating on a processor of a computing device which, retrieves streams of input from one or more of a plurality of data sources, filters data to remove data records from the stream for a plurality of reasons drawn from, but not limited to a set comprising absence of all information, damage to data in the record, and presence of in-congruent information or missing information which invalidates the data record, splits filtered data stream into two or more identical parts, formats data within one data stream based upon a set of predetermined parameters so as to prepare for meaningful storage in a data store, sends identical data stream for further analysis and either linear transformation or branching transformation using resources of the system. 8. 
A method for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven business decision simulations, the method comprising the steps of: (a) retrieving business related data and analysis campaign command and control information using a business information retrieval engine stored in the memory of and operating on a processor of a computing device; (b) analyzing and transforming retrieved business related data using a business information analysis engine stored in the memory of and operating on a processor of a computing device in conjunction with previously designed analysis campaign command and control information; and (c) presenting business decision critical information as well as business pathway simulation information using a business decision and business path simulation engine based upon the results of analysis of previously retrieved business related data and previously entered analysis campaign command and control information. 9. The method of claim 8, wherein the business information retrieval engine employs a portal for human interface device input, at least a portion of which is business related data and at least another portion of which is commands and parameters related to the conduct of a current business analysis campaign. 10. 
The method of claim 9, wherein the business information retrieval engine employs a high volume deep web scraper stored in the memory of and operating on a processor of a computing device, which receives at least some scrape control and spider configuration parameters from the highly customizable cloud based interface, coordinates one or more world wide web searches (scrapes) using both general search control parameters and individual web search agent (spider) specific configuration data, receives scrape progress feedback information which may lead to issuance of further web search control parameters, controls and monitors the spiders on distributed scrape servers, receives the raw scrape campaign data from scrape servers, and aggregates at least portions of scrape campaign data from each web site or web page traversed as per the parameters of the scrape campaign. 11. The method of claim 10, wherein the archetype spiders are provided by a program library and individual spiders are created using configuration files. 12. The method of claim 10, wherein scrape campaign requests are persistently stored and can be reused or used as the basis for similar scrape campaigns. 13. 
The method of claim 9, wherein the business information retrieval engine employs a multidimensional time series data store stored in a memory of and operating on a processor of a computing device to receive a plurality of data from a plurality of sensors of heterogeneous types, some of which may have heterogeneous reporting and data payload transmission profiles, aggregates the sensor data over a predetermined amount of time, a predetermined quantity of data or a predetermined number of events, retrieves a specific quantity of aggregated sensor data per each access connection predetermined to allow reliable receipt and inclusion of the data, transparently retrieves quantities of aggregated sensor data too large to be reliably transferred by one access connection using a further plurality of access connections to allow capture of all aggregated sensor data under conditions of heavy sensor data influx, and stores aggregated sensor data in a simple key-value pair with very little or no data transformation from how the aggregated sensor data is received. 14. The method of claim 8, wherein the business data analysis engine employs a directed computational graph, stored in the memory of and operating on a processor of a computing device, which retrieves streams of input from one or more of a plurality of data sources, filters data to remove data records from the stream for a plurality of reasons drawn from, but not limited to, a set comprising absence of all information, damage to data in the record, and presence of incongruent information or missing information which invalidates the data record, splits the filtered data stream into two or more identical parts, formats data within one data stream based upon a set of predetermined parameters so as to prepare for meaningful storage in a data store, and sends the identical data stream for further analysis and either linear transformation or branching transformation using resources of the system.
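The directed computational graph claimed above boils down to three stream operations: filter out invalid records, split the filtered stream into identical branches, and format one branch for storage. A minimal sketch of that stage, with all names and the validity rule invented for illustration (the patent does not specify them):

```python
# Hypothetical sketch of the claimed directed-computational-graph stage:
# filter invalid records, split the stream into identical parts, and
# format one branch for storage. All names are illustrative.

def is_valid(record):
    """Reject records that are empty or missing required fields (assumed rule)."""
    if not record:
        return False
    required = ("id", "payload")
    return all(k in record and record[k] is not None for k in required)

def format_for_store(record):
    """Normalize a record into a flat string key-value form for storage."""
    return {str(k): str(v) for k, v in record.items()}

def process_stream(records, n_branches=2):
    """Filter, split into identical branches, and format branch 0 for the store."""
    filtered = [r for r in records if is_valid(r)]
    branches = [list(filtered) for _ in range(n_branches)]
    branches[0] = [format_for_store(r) for r in branches[0]]
    return branches

# One damaged record ({}) and one with missing information are dropped.
raw = [{"id": 1, "payload": "a"}, {}, {"id": 2, "payload": None}]
store_branch, analysis_branch = process_stream(raw)
```

The second branch keeps the untransformed records, matching the claim's split into a storage path and an analysis/transformation path.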
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven simulations of alternate candidate business action comprising a business data retrieval engine stored in a memory of and operating on a processor of a computing device, a business data analysis engine stored in a memory of and operating on a processor of a computing device and a business decision and business action path simulation engine stored in a memory of and operating on a processor of one or more computing devices has been developed.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven simulations of alternate candidate business action comprising a business data retrieval engine stored in a memory of and operating on a processor of a computing device, a business data analysis engine stored in a memory of and operating on a processor of a computing device and a business decision and business action path simulation engine stored in a memory of and operating on a processor of one or more computing devices has been developed.
A method of avoiding harmful chemical emission concentration levels, the method comprising implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels, detecting an indicator, and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing prescribed personal protective equipment.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of avoiding harmful chemical emission concentration levels, comprising: implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels; detecting an indicator; and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing prescribed personal protective equipment. 2. The method of claim 1, wherein the cognitive suite of workplace hygiene and injury predictors has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels by receiving signals from one or more sensors, correlating the signals with scheduled operations, and identifying indicators corresponding to one or more chemical emission sources operating at the time of the harmful chemical emission concentration levels based on the scheduled operation. 3. The method of claim 1, which further comprises predicting chemical emission exposure levels of a person, tracking cumulative actual chemical emission exposure levels for the person, and pre-emptively adjusting the time of a scheduled task in anticipation of predicted chemical emission exposure levels. 4. The method of claim 3, wherein predicting chemical emission exposure levels includes determining the location of the person for one or more assigned tasks, identifying a path used by the person to transit to the location(s), analyzing a chemical emissions map for the location(s), and calculating a predicted amount of cumulative chemical emissions exposure for the person. 5. The method of claim 3, which further comprises monitoring the actual chemical emission exposure levels experienced by the person in a chemical emissions zone. 6. 
The method of claim 5, which further comprises identifying the location of a person in the chemical emissions zone, and transmitting a control signal to the chemical emissions source to slow down or turn off for a predetermined period of time to reduce the actual chemical emission exposure levels in the chemical emission zone. 7. The method of claim 1, wherein the indicator is identification by facial recognition, activation of an interlock, detection of an RFID at a portal, or combinations thereof. 8. A chemical emissions protection system, comprising: a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emissions sources and indicators of harmful chemical emission concentration levels; a monitoring interface coupled to one or more sensor(s) for detecting an indicator; and a warning system configured to implement a corrective action by altering the operation of a chemical emission source, modifying a time of a scheduled task, and/or changing prescribed personal protective equipment. 9. The system of claim 8, wherein the cognitive suite of workplace hygiene and injury predictors is trained by receiving signals from one or more sensors, correlating the signals with scheduled operations, and identifying indicators corresponding to one or more chemical emission sources operating at the time of the harmful chemical emission concentration levels based on the scheduled operation. 10. The system of claim 8, which further comprises a scheduler configured to predict chemical emission exposure levels of a person, track cumulative actual chemical emission exposure levels for the person, and pre-emptively adjust the time of a scheduled task in anticipation of predicted chemical emission exposure levels. 11. 
The system of claim 10, wherein the monitoring interface is configured to determine the location of the person for one or more assigned tasks, identify a path used by the person to transit to the location(s), analyze a chemical emissions map for the location(s), and calculate a predicted amount of cumulative chemical emissions exposure for the person. 12. The system of claim 10, wherein the monitoring interface is configured to monitor the actual chemical emission exposure levels experienced by the person in a chemical emissions zone. 13. The system of claim 12, wherein the monitoring interface is configured to identify the location of a person in the chemical emissions zone, and transmit a control signal to the chemical emissions source to slow down or turn off for a predetermined period of time to reduce the actual chemical emission exposure levels in the chemical emissions zone. 14. The system of claim 8, wherein the indicator is identification by facial recognition, activation of an interlock, detection of an RFID at a portal, or combinations thereof. 15. A non-transitory computer readable storage medium comprising a computer readable program for predicting exposure to harmful chemical emission concentration levels, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels; detecting an indicator; and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing the prescribed personal protective equipment. 16. 
The non-transitory computer readable storage medium of claim 15, wherein the computer readable program when executed on a computer causes the computer to: learn to identify chemical emission sources and indicators of harmful chemical emission concentration levels by receiving signals from one or more sensors, correlating the signals with scheduled operations, and identifying indicators corresponding to one or more chemical emission sources operating at the time of the harmful chemical emission concentration levels based on the scheduled operation. 17. The non-transitory computer readable storage medium of claim 15, wherein the computer readable program when executed on a computer causes the computer to: predict chemical emission exposure levels of a person, track cumulative actual chemical emission exposure levels for the person, and pre-emptively adjust the time of a scheduled task in anticipation of predicted chemical emission exposure levels. 18. The non-transitory computer readable storage medium of claim 17, wherein the computer readable program when executed on a computer causes the computer to: predict chemical emission exposure levels by determining the location of the person for one or more assigned tasks, identifying a path used by the person to transit to the location(s), analyzing a chemical emissions map for the location(s), and calculating a predicted amount of cumulative chemical emissions exposure for the person. 19. The non-transitory computer readable storage medium of claim 17, wherein the computer readable program when executed on a computer causes the computer to: monitor the actual chemical emission exposure levels experienced by the person in a chemical emissions zone. 20. 
The non-transitory computer readable storage medium of claim 19, wherein the computer readable program when executed on a computer causes the computer to: identify the location of a person in the chemical emissions zone, and transmit a control signal to the chemical emissions source to slow down or turn off for a predetermined period of time to reduce the actual chemical emission exposure levels in the chemical emissions zone.
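Claims 3, 4, and 18 above describe a concrete prediction loop: determine the person's path, look each zone up in a chemical emissions map, and accumulate predicted exposure to decide whether to pre-emptively adjust the schedule. A sketch under assumed values (the map, dwell time, and exposure limit are invented for the example, not taken from the patent):

```python
# Illustrative sketch of the claimed exposure prediction: walk a person's
# transit path over a chemical-emissions map and accumulate predicted
# exposure. Zone names, emission levels, and the limit are hypothetical.

emissions_map = {          # zone -> emission level (arbitrary units)
    "entrance": 0.1,
    "corridor": 0.3,
    "paint_shop": 2.5,
    "assembly": 0.8,
}

def predicted_exposure(path, minutes_per_zone=10):
    """Sum exposure (emission level x dwell time) along the transit path."""
    return sum(emissions_map.get(zone, 0.0) * minutes_per_zone
               for zone in path)

def corrective_action_needed(path, limit=30.0):
    """True if predicted cumulative exposure exceeds the limit, i.e. the
    scheduled task time or prescribed PPE should be adjusted pre-emptively."""
    return predicted_exposure(path) > limit

path = ["entrance", "corridor", "paint_shop", "assembly"]
```

For this path the prediction is 37.0 units, above the assumed limit of 30, so the sketch would trigger a corrective action such as rescheduling the task.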
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method of avoiding harmful chemical emission concentration levels, the method comprising implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels, detecting an indicator, and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing prescribed personal protective equipment.
G06N5043
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method of avoiding harmful chemical emission concentration levels, the method comprising implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels, detecting an indicator, and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing prescribed personal protective equipment.
A constraint problem may be represented as a digital circuit comprising at least one gate and at least one constrained input or at least one constrained output, or a combination of at least one constrained input and at least one constrained output. A matrix may be generated for each of the at least one gates. A constraint matrix may be generated for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. A final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix may be generated. The final matrix may be translated into an energy representation useable by a quantum computer. The energy of the energy representation may be minimized to generate a q-bit output, and a result of the constraint problem may be determined based on the q-bit output.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of formatting a constraint problem for input to a quantum processor and solving the constraint problem, the method comprising: representing, with a classical processor, a quantum processor, or a combination thereof, the constraint problem as a digital circuit comprising at least one gate and at least one constrained input, at least one constrained output, or a combination of at least one constrained input and at least one constrained output; generating, with the classical processor, the quantum processor, or the combination thereof, a matrix for each of the at least one gates; generating, with the classical processor, the quantum processor, or the combination thereof, a constraint matrix for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output; generating, with the classical processor, the quantum processor, or the combination thereof, a final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix; translating, with the classical processor, the quantum processor, or the combination thereof, the final matrix into an energy representation useable by the quantum processor; minimizing, with the quantum processor, an energy of the energy representation to generate a quantum bit (q-bit) output; and determining, with the classical processor, the quantum processor, or the combination thereof, a result of the constraint problem based on the q-bit output. 2. The method of claim 1, wherein the translating comprises interpreting the final matrix as a Hamiltonian energy matrix. 3. The method of claim 2, wherein the Hamiltonian energy matrix comprises a spin glass Hamiltonian energy matrix. 4. 
The method of claim 2, wherein the Hamiltonian energy matrix represents each of the at least one constrained inputs, each of the at least one constrained outputs, or each of the combination of at least one constrained input and at least one constrained output as a row and column entry in the Hamiltonian energy matrix. 5. The method of claim 2, further comprising converting, with the classical processor, the quantum processor, or the combination thereof, the Hamiltonian energy matrix into an appropriate form for the quantum computer used to minimize the energy of the Hamiltonian energy matrix. 6. The method of claim 1, wherein the representing further comprises assigning a label to each of a plurality of intermediate outputs within the digital circuit. 7. The method of claim 1, wherein the representing further comprises assigning a label to each of the at least one gates. 8. The method of claim 1, wherein the digital circuit comprises at least one two-input logic gate selected from a set of universal gates. 9. The method of claim 8, wherein the set of universal gates comprises eight two-input gates formed by all two-input combinations of AND and OR with optional NOT functionality on one or both of the inputs. 10. The method of claim 1, wherein the digital circuit comprises at least one sub-circuit that evaluates to true when constraints on an input are satisfied and an output of the sub-circuit is constrained to be true. 11. The method of claim 1, further comprising converting, with the classical processor, the quantum processor, or the combination thereof, the digital circuit into a table comprising data about the at least one gate and the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. 12. 
The method of claim 1, wherein generating the matrix for each of the at least one gates comprises: computing a permutation matrix for the gate; choosing a gate matrix based on a gate type of the gate; and multiplying a transpose of the permutation matrix, the gate matrix, and the permutation matrix to form the matrix for the gate. 13. The method of claim 1, wherein generating the final matrix comprises: adding each matrix for each of the at least one gates together to create a circuit matrix; and adding the constraint matrix to the circuit matrix. 14. The method of claim 1, wherein the quantum processor uses adiabatic quantum computing. 15. The method of claim 1, wherein the digital circuit represents a cryptographic function, a cryptographic algorithm, or a traveling salesman problem. 16. The method of claim 15, wherein the cryptographic function is a one-way function. 17. A system for formatting a constraint problem for input to a quantum computer and solving the constraint problem, the system comprising: a classical computer configured to: represent the constraint problem as a digital circuit comprising at least one gate and at least one constrained input, at least one constrained output, or a combination of at least one constrained input and at least one constrained output; generate a matrix for each of the at least one gates; generate a constraint matrix for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output; generate a final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix; and translate the final matrix into an energy representation useable by the quantum computer; and the quantum computer configured to: minimize an energy of the energy representation to generate a quantum bit (q-bit) output; wherein the classical computer is further configured to determine a result of the constraint problem 
based on the q-bit output. 18. The system of claim 17, wherein the translating comprises interpreting the final matrix as a Hamiltonian energy matrix. 19. The system of claim 18, wherein the Hamiltonian energy matrix comprises a spin glass Hamiltonian energy matrix. 20. The system of claim 18, wherein the Hamiltonian energy matrix represents each of the at least one constrained inputs, each of the at least one constrained outputs, or each of the combination of at least one constrained input and at least one constrained output as a row and column entry in the Hamiltonian energy matrix. 21. The system of claim 18, wherein the classical computer is further configured to convert the Hamiltonian energy matrix into an appropriate form for the quantum computer used to minimize the energy of the Hamiltonian energy matrix. 22. The system of claim 17, wherein the representing further comprises assigning a label to each of a plurality of intermediate outputs within the digital circuit. 23. The system of claim 17, wherein the representing further comprises assigning a label to each of the at least one gates. 24. The system of claim 17, wherein the digital circuit comprises at least one two-input logic gate selected from a set of universal gates. 25. The system of claim 24, wherein the set of universal gates comprises eight two-input gates formed by all two-input combinations of AND and OR with optional NOT functionality on one or both of the inputs. 26. The system of claim 17, wherein the digital circuit comprises at least one sub-circuit that evaluates to true when constraints on an input are satisfied and an output of the sub-circuit is constrained to be true. 27. 
The system of claim 17, wherein the classical computer is further configured to convert the digital circuit into a table comprising data about the at least one gate and the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. 28. The system of claim 17, wherein generating the matrix for each of the at least one gates comprises: computing a permutation matrix for the gate; choosing a gate matrix based on a gate type of the gate; and multiplying a transpose of the permutation matrix, the gate matrix, and the permutation matrix to form the matrix for the gate. 29. The system of claim 17, wherein generating the final matrix comprises: adding each matrix for each of the at least one gates together to create a circuit matrix; and adding the constraint matrix to the circuit matrix. 30. The system of claim 17, wherein the quantum computer uses adiabatic quantum computing. 31. The system of claim 17, wherein the digital circuit represents a cryptographic function, a cryptographic algorithm, or a traveling salesman problem. 32. The system of claim 31, wherein the cryptographic function is a one-way function. 33. 
A quantum computer configured to: represent a constraint problem as a digital circuit comprising at least one gate and at least one constrained input, at least one constrained output, or a combination of at least one constrained input and at least one constrained output; generate a matrix for each of the at least one gates; generate a constraint matrix for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output; generate a final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix; translate the final matrix into an energy representation useable by the quantum computer; minimize an energy of the energy representation to generate a quantum bit (q-bit) output; and determine a result of the constraint problem based on the q-bit output. 34. The quantum computer of claim 33, wherein the translating comprises interpreting the final matrix as a Hamiltonian energy matrix. 35. The quantum computer of claim 34, wherein the Hamiltonian energy matrix comprises a spin glass Hamiltonian energy matrix. 36. The quantum computer of claim 34, wherein the Hamiltonian energy matrix represents each of the at least one constrained inputs, each of the at least one constrained outputs, or each of the combination of at least one constrained input and at least one constrained output as a row and column entry in the Hamiltonian energy matrix. 37. The quantum computer of claim 34, wherein the quantum computer is further configured to convert the Hamiltonian energy matrix into an appropriate form for the quantum computer used to minimize the energy of the Hamiltonian energy matrix. 38. The quantum computer of claim 33, wherein the representing further comprises assigning a label to each of a plurality of intermediate outputs within the digital circuit. 39. 
The quantum computer of claim 33, wherein the representing further comprises assigning a label to each of the at least one gates. 40. The quantum computer of claim 33, wherein the digital circuit comprises at least one two-input logic gate selected from a set of universal gates. 41. The quantum computer of claim 40, wherein the set of universal gates comprises eight two-input gates formed by all two-input combinations of AND and OR with optional NOT functionality on one or both of the inputs. 42. The quantum computer of claim 33, wherein the digital circuit comprises at least one sub-circuit that evaluates to true when constraints on an input are satisfied and an output of the sub-circuit is constrained to be true. 43. The quantum computer of claim 33, wherein the quantum computer is further configured to convert the digital circuit into a table comprising data about the at least one gate and the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. 44. The quantum computer of claim 33, wherein generating the matrix for each of the at least one gates comprises: computing a permutation matrix for the gate; choosing a gate matrix based on a gate type of the gate; and multiplying a transpose of the permutation matrix, the gate matrix, and the permutation matrix to form the matrix for the gate. 45. The quantum computer of claim 33, wherein generating the final matrix comprises: adding each matrix for each of the at least one gates together to create a circuit matrix; and adding the constraint matrix to the circuit matrix. 46. The quantum computer of claim 33, wherein the quantum computer uses adiabatic quantum computing. 47. The quantum computer of claim 33, wherein the digital circuit represents a cryptographic function, a cryptographic algorithm, or a traveling salesman problem. 48. 
The quantum computer of claim 47, wherein the cryptographic function is a one-way function.
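Claims 12/28/44 and 13/29/45 above spell out the matrix arithmetic: each gate contributes P^T·G·P (its local gate matrix mapped through a permutation into the global variable ordering), the gate contributions are summed into a circuit matrix, and the constraint matrix is added to form the final matrix. A pure-Python sketch with invented 2x2 values (the patent does not give concrete penalty matrices):

```python
# Hedged sketch of the claimed construction: final = sum over gates of
# P^T * G * P, plus the constraint matrix. The numeric entries below are
# illustrative placeholders, not the patent's actual gate penalties.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*b)] for row in a]

def matadd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def gate_contribution(perm, gate_matrix):
    """P^T G P: map a gate's local matrix into the global variable order."""
    return matmul(matmul(transpose(perm), gate_matrix), perm)

# One gate acting on two global variables, wired in swapped order.
P = [[0, 1],
     [1, 0]]
G = [[1, 2],
     [3, 4]]
constraint = [[1, 0],      # penalty pinning the first global variable
              [0, 0]]

final = matadd(gate_contribution(P, G), constraint)
```

With more gates, each contribution would be added into a running circuit matrix before the constraint matrix is added, exactly as claims 13/29/45 describe; the final matrix would then be interpreted as a Hamiltonian energy matrix for the quantum processor to minimize.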
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A constraint problem may be represented as a digital circuit comprising at least one gate and at least one constrained input or at least one constrained output, or a combination of at least one constrained input and at least one constrained output. A matrix may be generated for each of the at least one gates. A constraint matrix may be generated for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. A final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix may be generated. The final matrix may be translated into an energy representation useable by a quantum computer. The energy of the energy representation may be minimized to generate a q-bit output, and a result of the constraint problem may be determined based on the q-bit output.
G06N99002
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A constraint problem may be represented as a digital circuit comprising at least one gate and at least one constrained input or at least one constrained output, or a combination of at least one constrained input and at least one constrained output. A matrix may be generated for each of the at least one gates. A constraint matrix may be generated for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. A final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix may be generated. The final matrix may be translated into an energy representation useable by a quantum computer. The energy of the energy representation may be minimized to generate a q-bit output, and a result of the constraint problem may be determined based on the q-bit output.
At a machine learning service, a determination is made that an analysis to detect whether at least a portion of contents of one or more observation records of a first data set are duplicated in a second set of observation records is to be performed. A duplication metric is obtained, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set. In response to determining that the duplication metric meets a threshold criterion, one or more responsive actions are initiated, such as the transmission of a notification to a client of the service.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system, comprising: one or more computing devices configured to: generate, at a machine learning service of a provider network, one or more space-efficient representations of a first set of observation records associated with a machine learning model, wherein individual ones of the space-efficient representations utilize less storage than the first set of observation records, and wherein at least a subset of observation records of the first set include respective values of a first group of one or more variables; receive an indication that a second set of observation records is to be examined for the presence of duplicates of observation records of the first set in accordance with a probabilistic duplicate detection technique, wherein at least a subset of observation records of the second set include respective values of the first group of one or more variables; obtain, using at least one space-efficient representation of the one or more space-efficient representations, a duplication metric corresponding to at least a portion of the second set, indicative of a non-zero probability that one or more observation records of the second set are duplicates of one or more observation records of the first set with respect to at least the first group of one or more variables; and in response to a determination that the duplication metric meets a threshold criterion, implement one or more responsive actions including a notification of a detection of potential duplicate observation records to the client. 2. The system as recited in claim 1, wherein a particular space-efficient representation of the one or more space-efficient representations includes one or more of: (a) a Bloom filter, (b) a quotient filter, or (c) a skip list. 3. 
The system as recited in claim 1, wherein the first set of one or more observation records comprises a training data set of the machine learning model, and wherein the second set of one or more observation records comprises a test data set of the machine learning model. 4. The system as recited in claim 1, wherein a particular space-efficient representation of the one or more space-efficient representations includes a Bloom filter, wherein the one or more computing devices are further configured to: estimate, prior to generating the Bloom filter, (a) an approximate count of observation records included in the first set and (b) an approximate size of individual observation records of the first set; and determine, based at least in part on the approximate count or the approximate size, one or more parameters to be used to generate the Bloom filter, including one or more of: (a) a number of bits to be included in the Bloom filter (b) a number of hash functions to be used to generate the Bloom filter, or (c) a particular type of hash function to be used to generate the Bloom filter. 5. The system as recited in claim 1, wherein the one or more responsive actions include one or more of: (a) a transmission of an indication, to the client, of a particular observation record of the second set which has been identified as having a non-zero probability of being a duplicate, (b) a removal, from the second set, of a particular observation record which has been identified as having a non-zero probability of being a duplicate, prior to performing a particular machine learning task using the second set, (c) a transmission, to the client, of an indication of a potential prediction error associated with removing, from the second set, one or more observation records which have been identified as having non-zero probabilities of being duplicates, or (d) a cancellation of a machine learning job associated with the second set. 6. 
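The Bloom filter sizing step described in the claims above — estimating a bit count and a hash-function count from an approximate record count — follows the standard Bloom filter sizing math. A minimal sketch, in which the function name `bloom_parameters` and the 1% false-positive target are illustrative choices rather than details from the patent:

```python
import math

def bloom_parameters(approx_record_count: int, target_fp_rate: float):
    # Standard Bloom filter sizing formulas:
    #   m = -n * ln(p) / (ln 2)^2   (number of bits)
    #   k = (m / n) * ln 2          (number of hash functions)
    m = math.ceil(-approx_record_count * math.log(target_fp_rate)
                  / (math.log(2) ** 2))
    k = max(1, round((m / approx_record_count) * math.log(2)))
    return m, k

# e.g. roughly one million observation records, 1% false-positive target
bits, hashes = bloom_parameters(1_000_000, 0.01)
```

The approximate record size mentioned in the claim would additionally bound total memory; this sketch only covers the count-driven parameters.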
A method, comprising: performing, by one or more computing devices: generating, at a machine learning service, one or more alternate representations of a first set of observation records, wherein at least one alternate representation occupies a different amount of space than the first set of observation records; obtaining, using at least one alternate representation of the one or more alternate representations, a duplication metric corresponding to at least a portion of a second set of observation records, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set, with respect to one or more variables for which respective values are included in at least some observation records of the first set; and in response to determining that the duplication metric meets a threshold criterion, implementing one or more responsive actions. 7. The method as recited in claim 6, wherein a particular alternate representation of the one or more alternate representations includes one or more of: (a) a Bloom filter, (b) a quotient filter, or (c) a skip list. 8. The method as recited in claim 6, wherein the first set of one or more observation records comprises a training data set of a particular machine learning model, and wherein the second set of one or more observation records comprises a test data set of the particular machine learning model. 9. 
The method as recited in claim 6, wherein a particular alternate representation of the one or more alternate representations includes a Bloom filter, further comprising performing, by the one or more computing devices: estimating, prior to generating the Bloom filter, (a) an approximate count of observation records included in the first set and (b) an approximate size of individual observation records of the first set; and determining, based at least in part on the approximate count or the approximate size, one or more parameters to be used to generate the Bloom filter, including one or more of: (a) a number of bits to be included in the Bloom filter (b) a number of hash functions to be used to generate the Bloom filter, or (c) a particular type of hash function to be used to generate the Bloom filter. 10. The method as recited in claim 6, wherein the one or more response actions include one or more of: (a) notifying a client of a detection of potential duplicate observation records, (b) providing an indication of a particular observation record of the second set which has been identified as having a non-zero probability of being a duplicate, (c) removing, from the second set, a particular observation record which has been identified as having a non-zero probability of being a duplicate, prior to performing a particular machine learning task using the second set, (d) providing, to a client, an indication of a potential prediction error associated with removing, from the second data set, one or more observation records which have been identified as having non-zero probabilities of being duplicates, or (e) abandoning a machine learning job associated with the second set. 11. The method as recited in claim 6, wherein a particular responsive action of the one or more responsive actions comprises providing an indication of a confidence level that a particular observation record of the second set is a duplicate. 12. 
The method as recited in claim 6, wherein the group of one or more variables excludes an output variable whose value is to be predicted by a machine learning model. 13. The method as recited in claim 6, wherein said determining that the duplication metric meets a threshold criterion comprises one or more of: (a) determining that the number of observation records of the second set which have been identified as having non-zero probabilities of being duplicates exceeds a first threshold or (b) determining that the fraction of the observation records of the second set that have been identified as having non-zero probabilities of being duplicates exceeds a second threshold. 14. The method as recited in claim 6, wherein said generating the one or more alternate representations of the first set of observation records comprises: subdividing the first set of observation records into a plurality of partitions; generating, at respective servers of the machine learning service, a respective Bloom filter corresponding to individual ones of the plurality of partitions; and combining the Bloom filters generated at the respective servers into a consolidated Bloom filter. 15. The method as recited in claim 6, further comprising performing, by the one or more computing devices: receiving, via a programmatic interface, an indication from the client of one or more of (a) a parameter to be used by the machine learning service to determine whether the threshold criterion has been met, or (b) the one or more responsive actions. 16. The method as recited in claim 6, wherein the first set of observation records and the second set of observation records are respective subsets of one of: (a) a training data set of a particular machine learning model, (b) a test data set of a particular machine learning model, or (c) a source data set from which a training data set of a particular machine learning model and a test data set of the particular machine learning model are to be obtained. 17. 
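The partitioned scheme described above — per-partition Bloom filters built at respective servers, then combined into a consolidated filter — can be sketched as below. The `BloomFilter` class, the SHA-256 hashing, and the toy records are assumptions for illustration, not details from the claims. Because a Bloom filter admits false positives but never false negatives, every true duplicate in the test set is flagged, which matches the claims' "non-zero probability" framing:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k=3):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, item):
        # derive k bit positions from SHA-256 of a salted item string
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def may_contain(self, item):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

    def union(self, other):
        # consolidate two filters built with identical parameters
        merged = BloomFilter(self.m, self.k)
        merged.bits = self.bits | other.bits
        return merged

# per-partition filters, e.g. built at two different servers
part_a, part_b = BloomFilter(), BloomFilter()
for record in ["a,1", "b,2"]:
    part_a.add(record)
for record in ["c,3", "d,4"]:
    part_b.add(record)
consolidated = part_a.union(part_b)

# score a second ("test") set against the consolidated filter
test_set = ["a,1", "x,9", "c,3", "y,8"]
flagged = [r for r in test_set if consolidated.may_contain(r)]
duplication_fraction = len(flagged) / len(test_set)
```

A threshold check such as `duplication_fraction > 0.1` would then trigger the responsive actions (client notification, record removal, job cancellation) the claims enumerate.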
A non-transitory computer-accessible storage medium storing program instructions that when executed on one or more processors: determine, at a machine learning service, that an analysis to detect whether at least a portion of contents of one or more observation records of a first set of observation records are duplicated in a second set of observation records is to be performed; obtain a duplication metric corresponding to at least a portion of a second set of observation records, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set, with respect to one or more variables for which respective values are included in at least some observation records of the first set; and in response to a determination that the duplication metric meets a threshold criterion, implement one or more responsive actions. 18. The non-transitory computer-accessible storage medium as recited in claim 17, wherein to obtain the alternate metric, the instructions when executed on the one or more processors generate an alternate representation of the first set of observation records, wherein the alternate representation includes one or more of: (a) a Bloom filter, (b) a quotient filter, or (c) a skip list. 19. The non-transitory computer-accessible storage medium as recited in claim 17, wherein the first set of one or more observation records comprises a training data set of a particular machine learning model, and wherein the second set of one or more observation records comprises a test data set of the particular machine learning model. 20. The non-transitory computer-accessible storage medium as recited in claim 17, wherein a particular responsive action of the one or more responsive actions comprises providing an indication of a confidence level that a particular observation record of the second set is a duplicate.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: At a machine learning service, a determination is made that an analysis to detect whether at least a portion of contents of one or more observation records of a first data set are duplicated in a second set of observation records is to be performed. A duplication metric is obtained, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set. In response to determining that the duplication metric meets a threshold criterion, one or more responsive actions are initiated, such as the transmission of a notification to a client of the service.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: At a machine learning service, a determination is made that an analysis to detect whether at least a portion of contents of one or more observation records of a first data set are duplicated in a second set of observation records is to be performed. A duplication metric is obtained, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set. In response to determining that the duplication metric meets a threshold criterion, one or more responsive actions are initiated, such as the transmission of a notification to a client of the service.
Disclosed are various embodiments for data processing using decision tree data structures to implement artificial intelligence in an ingestion process. At least one computing device may be employed to access reference data from a data store accessible to the at least one computing device and parse the reference data using a natural language processor to identify relevant data for storage in at least one decision tree data structure. An ingestion process is applied to receive input data from at least one client device remotely over a transmission network. The at least one decision tree data structure is queried to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process. A first metric for a first child node and a second metric for a second child node are generated using the input data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A non-transitory computer-readable medium having a program executable in at least one computing device, the program comprising program code that, when executed by the at least one computing device, causes the at least one computing device to: access reference data from a data store accessible to the at least one computing device; parse the reference data using a natural language processor to identify relevant data; store the relevant data identified in at least one decision tree data structure; apply an ingestion process to receive input data from at least one client device, the input data being received by the at least one computing device from the client device remotely over a transmission network; query the at least one decision tree data structure to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process; and generate a first metric for a first child node and a second metric for a second child node using the input data, the first child node and the second child node being children of the node identified in the at least one decision tree data structure. 7. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to: identify a binary response from the input data; and apply a receiver operating curve (ROC), wherein the first metric for the first child node and the second metric for the second child node are generated based on the receiver operating curve (ROC). 2. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to identify a subsequent state of the ingestion process based on the node identified in the at least one decision tree data structure. 3. 
The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to: identify an unrecognizable term in the input data; and communicate with a service external to the at least one computing device to define the unrecognizable term through an application programming interface (API) of the service. 4. The non-transitory computer-readable medium of claim 1, wherein the at least one client device comprises a plurality of client devices; and wherein the program further comprises program code that, when executed, causes the at least one computing device to: access input data provided from a first one of the plurality of client devices obtained during the state of the ingestion process; access input data provided from a second one of the plurality of client devices obtained during the state of the ingestion process; and identify that a conflict exists between the input data provided from the first one of the plurality of client devices and the input data provided from the second one of the plurality of client devices. 5. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to determine a confidence metric from the input data, wherein the first metric and the second metric are generated using the confidence metric. 6. The non-transitory computer-readable medium of claim 5, wherein the program further comprises program code that, when executed, causes the at least one computing device to determine a credibility metric associated with the client device from which the input data is received, wherein the confidence metric is determined using the credibility metric. 8. 
A system, comprising: a server computing device in data communication with at least one client device over a network; program instructions executable by the at least one server computing device that, when executed by the at least one server computing device cause the at least one server computing device to: access reference data from a data store accessible to the at least one computing device; parse the reference data using a natural language processor to identify relevant data; store the relevant data identified in at least one decision tree data structure; apply an ingestion process to receive input data from the at least one client device, the input data being received by the at least one computing device from the client device remotely over a transmission network; query the at least one decision tree data structure to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process; and generate a first metric for a first child node and a second metric for a second child node using the input data, the first child node and the second child node being children nodes of the node identified in the at least one decision tree data structure. 9. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to identify a subsequent state of the ingestion process based on the node identified in the at least one decision tree data structure. 10. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to: identify an unrecognizable term in the input data; and communicate with a service external to the at least one computing device to define the unrecognizable term through an application programming interface (API) of the service. 11. 
The system of claim 8, wherein the at least one client device comprises a plurality of client devices; and wherein the system further comprises program instructions that, when executed, cause the at least one computing device to: access input data provided from a first one of the plurality of client devices obtained during the state of the ingestion process; access input data provided from a second one of the plurality of client devices obtained during the state of the ingestion process; and identify that a conflict exists between the input data provided from the first one of the plurality of client devices and the input data provided from the second one of the plurality of client devices. 12. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to determine a confidence metric from the input data, wherein the first metric and the second metric are generated using the confidence metric. 13. The system of claim 12, further comprising program instructions that, when executed, cause the at least one computing device to determine a credibility metric associated with the client device from which the input data is received, wherein the confidence metric is determined using the credibility metric. 14. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to: identify a binary response from the input data; and apply a receiver operating curve (ROC), wherein the first metric for the first child node and the second metric for the second child node are generated based on the receiver operating curve (ROC). 15. 
A computer-implemented method, comprising: accessing, by at least one computing device comprising at least one hardware processor, reference data from a data store accessible to the at least one computing device; parsing, by the at least one computing device, the reference data using a natural language processor to identify relevant data; storing, by the at least one computing device, the relevant data identified in at least one decision tree data structure; applying, by the at least one computing device, an ingestion process to receive input data from at least one client device, the input data being received by the at least one computing device from the client device remotely over a transmission network; querying, by the at least one computing device, the at least one decision tree data structure to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process; and generating, by the at least one computing device, a first metric for a first child node and a second metric for a second child node using the input data, the first child node and the second child node being children nodes of the node identified in the at least one decision tree data structure. 16. The computer-implemented method of claim 15, further comprising identifying, by the at least one computing device, a subsequent state of the ingestion process based on the node identified in the at least one decision tree data structure. 17. The computer-implemented method of claim 15, further comprising: identifying, by the at least one computing device, an unrecognizable term in the input data; and communicating, by the at least one computing device, with a service external to the at least one computing device to define the unrecognizable term through an application programming interface (API) of the service. 18. 
The computer-implemented method of claim 15, wherein the at least one client device comprises a plurality of client devices; and wherein the computer-implemented method further comprises: accessing, by the at least one computing device, input data provided from a first one of the plurality of client devices obtained during the state of the ingestion process; accessing, by the at least one computing device, input data provided from a second one of the plurality of client devices obtained during the state of the ingestion process; and identifying, by the at least one computing device, that a conflict exists between the input data provided from the first one of the plurality of client devices and the input data provided from the second one of the plurality of client devices. 19. The computer-implemented method of claim 15, further comprising determining, by the at least one computing device, a confidence metric from the input data, wherein the first metric and the second metric are generated using the confidence metric. 20. The computer-implemented method of claim 19, further comprising determining, by the at least one computing device, a credibility metric associated with the client device from which the input data is received, wherein the confidence metric is determined using the credibility metric.
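The claims above leave open how the confidence and credibility metrics combine into the two child-node metrics. One hypothetical reading, in which the raw confidence of a client's binary response is simply discounted by that client's credibility and the weighted vote is split between the two children, is sketched here; every name and formula is an assumption for illustration, not the patented method:

```python
def child_node_metrics(binary_response: bool,
                       response_confidence: float,
                       client_credibility: float):
    # Hypothetical combination: discount the raw response confidence
    # by the submitting client's credibility, then split the weighted
    # vote between the first and second child nodes.
    confidence = response_confidence * client_credibility
    first_metric = confidence if binary_response else 1.0 - confidence
    second_metric = 1.0 - first_metric
    return first_metric, second_metric

# a "yes" response at 0.9 confidence from a client with 0.8 credibility
m_first, m_second = child_node_metrics(True, 0.9, 0.8)
```

The ROC-based variant in the claims would instead derive the split from a receiver operating curve over accumulated binary responses; this sketch shows only the confidence/credibility path.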
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Disclosed are various embodiments for data processing using decision tree data structures to implement artificial intelligence in an ingestion process. At least one computing device may be employed to access reference data from a data store accessible to the at least one computing device and parse the reference data using a natural language processor to identify relevant data for storage in at least one decision tree data structure. An ingestion process is applied to receive input data from at least one client device remotely over a transmission network. The at least one decision tree data structure is queried to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process. A first metric for a first child node and a second metric for a second child node are generated using the input data.
G06N5045
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Disclosed are various embodiments for data processing using decision tree data structures to implement artificial intelligence in an ingestion process. At least one computing device may be employed to access reference data from a data store accessible to the at least one computing device and parse the reference data using a natural language processor to identify relevant data for storage in at least one decision tree data structure. An ingestion process is applied to receive input data from at least one client device remotely over a transmission network. The at least one decision tree data structure is queried to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process. A first metric for a first child node and a second metric for a second child node are generated using the input data.
Provided herein in some embodiments is an artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases including one or more AI-engine modules and one or more server-side client-server interfaces. The one or more AI-engine modules include an instructor module and a learner module configured to train an AI model. An assembly code can be generated from a source code written in a pedagogical programming language describing a mental model of one or more concept modules to be learned by the AI model and curricula of one or more lessons for training the AI model. The one or more server-side client-server interfaces can be configured to enable client interactions from a local client such as submitting the source code for training the AI model and using the trained AI model for one or more predictions.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases, comprising: one or more AI-engine modules including an architect module, an instructor module, and a learner module, wherein the architect module is configured to propose an AI model from an assembly code, and wherein the instructor module and the learner module are configured to train the AI model in one or more training cycles with training data from one or more training data sources, wherein the assembly code is generated from a source code written in a pedagogical programming language, wherein the source code includes a mental model of one or more concept modules to be learned by the AI model using the training data and curricula of one or more lessons for training the AI model on the one or more concept modules, and wherein the AI engine is configured to instantiate a trained AI model based on the one or more concept modules learned by the AI model in the one or more training cycles; and one or more server-side client-server interfaces configured to enable client interactions with the AI engine in one or both client interactions selected from submitting the source code for training the AI model and using the trained AI model for one or more predictions based upon the training data wherein the learner module and the instructor module are configured to pick out the curricula of the one or more lessons, thereby significantly cutting down on training time, memory, and computing cycles used by the AI engine for training the AI model. 2. The AI engine of claim 1, wherein the one or more server-side client-server interfaces are configured to cooperate with one or more client-side client-server interfaces selected from a command-line interface, a graphical interface, a web-based interface, or a combination thereof. 3. 
The AI engine of claim 1, further comprising a compiler configured to generate the assembly code from the source code; and a training data manager configured to push or pull the training data from one or more training data sources selected from a simulator, a training data generator, a training data database, or a combination thereof. 4. The AI engine of claim 1, wherein the AI engine is configured to operate in a training mode or a predicting mode during the one or more training cycles, wherein, in the training mode, the instructor module and the learner module are configured to i) instantiate the AI model conforming to the AI model proposed by the architect module and ii) train the AI model with the curricula of the one or more lessons, and wherein, in the predicting mode, a predictor AI-engine module is configured to i) instantiate and execute the trained AI model on the training data for the one or more predictions in the predicting mode. 5. The AI engine of claim 1, wherein the AI engine is configured to heuristically pick an appropriate learning algorithm from a plurality of machine learning algorithms in the one or more databases for training the AI model proposed by the architect module. 6. 
The AI engine of claim 5, wherein the architect module is configured to propose one or more additional AI models, wherein the AI engine is configured to heuristically pick an appropriate learning algorithm from the plurality of machine learning algorithms in the one or more databases for each of the one or more additional AI models, wherein the instructor module and the learner module are configured to train the AI models in parallel, wherein the one or more additional AI models are also trained in one or more training cycles with the training data from one or more training data sources, wherein the AI engine is configured to instantiate one or more additional trained AI models based on the concept modules learned by the one or more AI models in the one or more training cycles, and wherein the AI engine is configured to identify a best trained AI model among the trained AI models. 7. The AI engine of claim 6, further comprising: a trained AI-engine AI model, wherein the trained AI-engine AI model provides enabling AI for proposing the AI models from the assembly code and picking the appropriate learning algorithms from the plurality of machine learning algorithms in the one or more databases for training the AI models, and wherein the AI engine is configured to continuously train the trained AI-engine AI model in providing the enabling AI for proposing the AI models and picking the appropriate learning algorithms. 8. The AI engine of claim 6, further comprising: a meta-learning module configured to keep a record in the one or more databases for i) the source code processed by the AI engine, ii) mental models of the source code, iii) the training data used for training the AI models, iv) the trained AI models, v) how quickly the trained AI models were trained to a sufficient level of accuracy, and vi) how accurate the trained AI models became in making predictions on the training data. 9. 
The AI engine of claim 1, wherein the AI engine is configured to make determinations regarding i) when to train the AI model on each of the one or more concept modules and ii) how extensively to train the AI model on each of the one or more concept modules, and wherein the determinations are based on the relevance of each of the one or more concept modules in one or more predictions of the trained AI model based upon the training data. 10. The AI engine of claim 1, wherein the AI engine is configured to provide one or more training status updates on training the AI model selected from i) an estimation of a proportion of a training plan completed for the AI model, ii) an estimation of a completion time for completing the training plan, iii) the one or more concept modules upon which the AI model is actively training, iv) mastery of the AI model on learning the one or more concept modules, v) fine-grained accuracy and performance of the AI model on learning the one or more concept modules, and vi) overall accuracy and performance of the AI model on learning one or more mental models. 11. 
An artificial intelligence (“AI”) system, comprising: one or more remote servers including an AI engine including one or more AI-engine modules including an architect module, an instructor module, and a learner module, wherein the architect module is configured to propose an AI model from an assembly code, and wherein the instructor module and the learner module are configured to train the AI model in one or more training cycles with training data; a compiler configured to generate the assembly code from a source code written in a pedagogical programming language, wherein the source code includes a mental model of one or more concept modules to be learned by the AI model using the training data and curricula of one or more lessons for training the AI model on the one or more concept modules, and wherein the AI engine is configured to instantiate a trained AI model based on the concept modules learned by the AI model in the one or more training cycles; one or more databases; and one or more server-side client-server interfaces configured to enable client interactions with the AI engine; and one or more local clients including a coder for generating the source code written in the pedagogical programming language; and one or more client-side client-server interfaces configured to enable client interactions with the AI engine in one or both client interactions selected from submitting the source code for training the AI model and using the trained AI model for one or more predictions based upon the training data, wherein the one or more client-side client-server interfaces are selected from a command-line interface, a graphical interface, a web-based interface, or a combination thereof, and wherein the AI system includes at least one server-side training data source or at least one client-side training data source. 12. 
A method for an artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases, comprising: proposing an AI model, wherein the AI engine includes an architect AI-engine module for proposing the AI model from an assembly code; training the AI model, wherein the AI engine includes an instructor AI-engine module and a learner AI-engine module for training the AI model in one or more training cycles with training data from one or more training data sources; compiling the assembly code from a source code, wherein a compiler is configured to generate the assembly code from the source code written in a pedagogical programming language, wherein the source code includes a mental model of one or more concept modules to be learned by the AI model using the training data and curricula of one or more lessons for training the AI model on the one or more concept modules; instantiating a trained AI model, wherein the AI engine is configured for instantiating the trained AI model based on the concept modules learned by the AI model in the one or more training cycles; and enabling client interactions, wherein one or more server-side client-server interfaces are configured for enabling client interactions with the AI engine in one or both client interactions selected from submitting the source code for training the AI model and using the trained AI model for one or more predictions based upon the training data. 13. The method of claim 12, further comprising: pushing or pulling the training data, wherein a training data manager is configured for pushing or pulling the training data from one or more training sources selected from a simulator, a training data generator, a training data database, or a combination thereof. 14. 
The method of claim 12, further comprising: operating the AI engine in a training mode or a predicting mode during the one or more training cycles, wherein, in the training mode, the instructor module and the learner module are configured to i) instantiate the AI model conforming to the AI model proposed by the architect module and ii) train the AI model, and wherein, in the predicting mode, a predictor AI module is configured to i) instantiate and execute the trained AI model on the training data for the one or more predictions in the predicting mode. 15. The method of claim 12, further comprising: heuristically picking an appropriate learning algorithm, wherein the AI engine is configured for picking the appropriate learning algorithm from a plurality of machine learning algorithms in the one or more databases for training the AI model proposed by the architect module. 16. The method of claim 15, further comprising: proposing one or more additional AI models, wherein the architect module is configured for proposing the one or more additional AI models; heuristically picking an appropriate learning algorithm from the plurality of machine learning algorithms in the one or more databases with the AI engine for each of the one or more additional AI models; training the AI models in parallel with the instructor module and learner module, wherein the one or more additional AI models are also trained in one or more training cycles with the training data from one or more training data sources; instantiating one or more additional trained AI models with the AI engine based on the concept modules learned by the one or more AI models in the one or more training cycles; and identifying a best trained AI model among the trained AI models with the AI engine. 17. 
The method of claim 16, further comprising: providing enabling AI for proposing the AI models from the assembly code and picking the appropriate learning algorithms from the plurality of machine learning algorithms in the one or more databases for training the AI models; and continuously training a trained AI-engine AI model with the AI engine to provide the enabling AI for proposing the AI models and picking the appropriate learning algorithms. 18. The method of claim 16, further comprising: keeping a record in the one or more databases with a meta-learning module, wherein the record includes i) the source code processed by the AI engine, ii) mental models of the source code, iii) the training data used for training the AI models, iv) the trained AI models, v) how quickly the trained AI models were trained to a sufficient level of accuracy, and vi) how accurate the trained AI models became in making predictions on the training data. 19. The method of claim 12, further comprising: making determinations with the AI engine regarding i) when to train the AI model on each of the one or more concept modules and ii) how extensively to train the AI model on each of the one or more concept modules, wherein the determinations are based on the relevance of each of the one or more concept modules in one or more predictions of the trained AI model based upon the training data. 20. 
The method of claim 12, further comprising: providing one or more training status updates with the AI engine on training the AI model, wherein the one or more training status updates are selected from i) an estimation of a proportion of a training plan completed for the AI model, ii) an estimation of a completion time for completing the training plan, iii) the one or more concept modules upon which the AI model is actively training, iv) mastery of the AI model on learning the one or more concept modules, v) fine-grained accuracy and performance of the AI model on learning the one or more concept modules, and vi) overall accuracy and performance of the AI model on learning one or more mental models.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Provided herein in some embodiments is an artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases including one or more AI-engine modules and one or more server-side client-server interfaces. The one or more AI-engine modules include an instructor module and a learner module configured to train an AI model. An assembly code can be generated from a source code written in a pedagogical programming language describing a mental model of one or more concept modules to be learned by the AI model and curricula of one or more lessons for training the AI model. The one or more server-side client-server interfaces can be configured to enable client interactions from a local client such as submitting the source code for training the AI model and using the trained AI model for one or more predictions.
G06N30454
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Provided herein in some embodiments is an artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases including one or more AI-engine modules and one or more server-side client-server interfaces. The one or more AI-engine modules include an instructor module and a learner module configured to train an AI model. An assembly code can be generated from a source code written in a pedagogical programming language describing a mental model of one or more concept modules to be learned by the AI model and curricula of one or more lessons for training the AI model. The one or more server-side client-server interfaces can be configured to enable client interactions from a local client such as submitting the source code for training the AI model and using the trained AI model for one or more predictions.
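The claims above describe an instructor module walking an AI model through curricula of lessons over concept modules and reporting training-status updates (plan progress, per-concept mastery). A minimal sketch of that loop, where the class names, the accuracy-update rule, and the status fields are all chosen here for illustration; the patent does not specify an implementation:

```python
from dataclasses import dataclass

@dataclass
class Lesson:
    name: str
    target_accuracy: float  # mastery threshold for this lesson

@dataclass
class ConceptModule:
    name: str
    lessons: list
    accuracy: float = 0.0  # fine-grained accuracy on this concept

class Instructor:
    """Drives training cycles over the mental model's concept modules."""

    def __init__(self, concepts):
        self.concepts = concepts
        self.completed = 0
        self.total = sum(len(c.lessons) for c in concepts)

    def train_cycle(self, concept, lesson, gain=0.5):
        # Stand-in for a real learner update: move accuracy toward the
        # lesson's target by a fixed fraction per training cycle.
        concept.accuracy += gain * (lesson.target_accuracy - concept.accuracy)
        self.completed += 1

    def status(self):
        # Mirrors the claimed status updates: plan progress and per-concept mastery.
        return {
            "plan_completed": self.completed / self.total,
            "mastery": {c.name: round(c.accuracy, 3) for c in self.concepts},
        }

concepts = [
    ConceptModule("keep_balance", [Lesson("basics", 0.8), Lesson("advanced", 0.95)]),
    ConceptModule("move_forward", [Lesson("basics", 0.9)]),
]
instructor = Instructor(concepts)
for concept in concepts:
    for lesson in concept.lessons:
        instructor.train_cycle(concept, lesson)
print(instructor.status()["plan_completed"])  # 1.0 once every lesson has run
```

In the real system the architect module would also propose the model topology and the learner would run an actual training algorithm; this sketch only shows the scheduling and reporting shape the claims describe.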
A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synaptic core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A neuromorphic computing system comprising: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synaptic core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, and generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request. 2. The neuromorphic computing system of claim 1, wherein the synapse core is to: transmit (i) a first weighted spike to the first address of the first post-synaptic neuron and (ii) a second weighted spike to the second address of the second post-synaptic neuron. 3. The neuromorphic computing system of claim 1, wherein the synapse core is to generate the first address and the second address by: a finite field mathematical function which is to apply to a first seed number to generate the first address; and the finite field mathematical function which is to apply to a second seed number to generate the second address. 4. The neuromorphic computing system of claim 1, wherein the synapse core is to generate the first address and the second address by: a finite field mathematical function which is to apply to a first seed number to generate the first address and the second address. 5. The neuromorphic computing system of claim 1, wherein the synapse core is to generate the first address by: a finite field mathematical function which is to apply to a seed number to generate least significant bits (LSBs) of the first address; a storage which is to be accessed to retrieve most significant bits (MSBs) of the first address; and the first address which is to generate based on the LSBs of the first address and the MSBs of the first address. 6. 
The neuromorphic computing system of claim 1, wherein the first post-synaptic neuron is included in a first core of the neuromorphic computing system, and wherein the synapse core is to generate the first address by: a Galois field function which is to apply to a seed number to generate an identification of the first post-synaptic neuron within the first core; a storage which is to be accessed to retrieve an identification of the first core; and the first address which is to be generated based on the identification of the first post-synaptic neuron and the identification of the first core. 7. The neuromorphic computing system of claim 1, wherein the synapse core is to: associate a first weight with a first spike to generate the first weighted spike; and associate a second weight with a second spike to generate the second weighted spike. 8. The neuromorphic computing system of claim 7, further comprising: a memory to store the first weight and the second weight; and one or more registers to store a plurality of seed numbers, wherein the first address and the second address are to be generated based on one or more seed numbers of the plurality of seed numbers. 9. The neuromorphic computing system of claim 8, further comprising: circuitry to update the first weight and the second weight in the memory. 10. 
A synapse core of a neuromorphic computing system, the synapse core comprising: mapping logic to (i) receive a request, the request comprising an identification of a pre-synaptic neuron that generated the request, (ii) access a seed number based on the identification of the pre-synaptic neuron, and (iii) map the seed number to an identification of a post-synaptic neuron that is included in a first core of the neuromorphic computing system; and a first storage to provide an identification of the first core, wherein the synapse core is to (i) generate an address of the post-synaptic neuron, based at least in part on the identification of the post-synaptic neuron and the identification of the first core, and (ii) transmit a spike to the address of the post-synaptic neuron. 11. The synapse core of claim 10, wherein the request is a first request, wherein the seed number is a first seed number, and wherein the mapping logic is to: receive a second request, the second request comprising the identification of the post-synaptic neuron that generated the second request; access a second seed number based on the identification of the post-synaptic neuron; and map the second seed number to the identification of the pre-synaptic neuron. 12. The synapse core of claim 11, wherein: the mapping logic is to (i) map the seed number to the identification of the post-synaptic neuron using at least in part a first mathematical function, and (ii) map the second seed number to the identification of the pre-synaptic neuron using at least in part a second mathematical function, wherein the second mathematical function is an inverse of the first mathematical function. 13. The synapse core of claim 10, wherein the synapse core is to: associate a weight to the spike, prior to the transmission of the spike to the address of the post-synaptic neuron. 14. The synapse core of claim 11, further comprising: a memory to store the weight. 15. 
The synapse core of claim 14, further comprising: circuitry to update the weight in the memory, wherein the circuitry comprises: a first circuitry to generate a change in weight; a second circuitry to read an original weight from the memory; a third circuitry to generate an updated weight based on the change in weight and the original weight; and a fourth circuitry to write the updated weight to the memory. 16. The synapse core of claim 10, wherein: the request includes a sparsity number; and the mapping logic is to map the seed number to identifications of a first number of post-synaptic neurons, the first number based on the sparsity number. 17. One or more non-transitory computer-readable storage media to store instructions that, when executed by a processor, cause the processor to: receive a request from a pre-synaptic neuron; generate, in response to the request, an address of a post-synaptic neuron, wherein the address is not stored in an apparatus, which comprises the processor, prior to receiving the request; and transmit a weighted spike to the address of the post-synaptic neuron. 18. The one or more non-transitory computer-readable storage media of claim 17, wherein the instructions, when executed, cause the processor to: apply a finite field mathematical function to a seed number to generate a first section of the address. 19. The one or more non-transitory computer-readable storage media of claim 18, wherein the instructions, when executed, cause the processor to: access a storage to retrieve a second section of the address; and generate the address based on the first section and the second section. 20. 
The one or more non-transitory computer-readable storage media of claim 17, wherein the instructions, when executed, cause the processor to: apply a Galois field function to a seed number to generate an identification of the post-synaptic neuron, wherein the post-synaptic neuron is included in a core of a neuromorphic computing system; access a storage to retrieve an identification of the core; and generate the address based on the identification of the post-synaptic neuron and the identification of the core. 21. The one or more non-transitory computer-readable storage media of claim 17, wherein the instructions, when executed, cause the processor to: associate a synaptic weight with a spike to generate the weighted spike.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synaptic core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
G06N30635
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synaptic core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
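The neuromorphic claims above center on regenerating post-synaptic addresses on demand from a seed number with a finite (Galois) field function, rather than storing a synapse table: LSBs come from the field function, MSBs (the core identification) from a small storage. A toy illustration of that idea, where the 16-bit LFSR taps, the core-ID lookup table, and the LSB/MSB split are illustrative assumptions, not the patent's actual parameters:

```python
CORE_ID_TABLE = {0: 0x2, 1: 0x5}  # hypothetical MSB storage: which core each target lives in

def lfsr_next(state):
    """One Galois-configuration LFSR step over GF(2) (16-bit, taps 0xB400).

    Because the sequence is fully determined by the seed, the synapse core
    can regenerate the same targets on every request instead of storing them.
    """
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def generate_addresses(seed, fanout):
    """Yield (core_id, neuron_id) pairs for `fanout` post-synaptic targets."""
    state = seed
    targets = []
    for i in range(fanout):
        state = lfsr_next(state)
        neuron_id = state & 0xFF        # LSBs: neuron identification within the core
        core_id = CORE_ID_TABLE[i % 2]  # MSBs: core identification retrieved from storage
        targets.append((core_id, neuron_id))
    return targets

# The same seed always regenerates the same targets, so no address memory is needed.
print(generate_addresses(0xACE1, 2) == generate_addresses(0xACE1, 2))  # True
```

A spike would then be weighted (from the weight memory of claims 7-9) and transmitted to each generated address; that half of the datapath is omitted here.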
Embodiments of a multimodal question answering (mQA) system are presented to answer a question about the content of an image. In embodiments, the model comprises four components: a Long Short-Term Memory (LSTM) component to extract the question representation; a Convolutional Neural Network (CNN) component to extract the visual representation; an LSTM component for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. A Freestyle Multilingual Image Question Answering (FM-IQA) dataset was constructed to train and evaluate embodiments of the mQA model. The quality of the generated answers of the mQA model on this dataset is evaluated by human judges through a Turing Test.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method that improves computer-user interaction by generating an answer to a question input related to an image input, the method comprising: receiving a question input in a natural language form; receiving an image input related to the question input; and inputting the question input and the image input into a multimodal question answering (mQA) model to generate an answer comprising multiple words generated sequentially, the mQA model comprising: a first component that encodes the question input into a dense vector representation; a second component to extract a visual representation of the image input; a third component to extract representation of a current word in the answer and its linguistic context; and a fourth component to generate a next word after the current word in the answer using a fusion comprising the dense vector representation, the visual representation, and the representation of the current word. 2. The computer-implemented method of claim 1 wherein the first component is a first long short term memory (LSTM) network comprising a first word embedding layer and a first LSTM layer. 3. The computer-implemented method of claim 2 wherein the third component is a second LSTM network comprising a second word embedding layer and a second LSTM layer. 4. The computer-implemented method of claim 3 wherein the first word-embedding layer shares a weight matrix with the second word-embedding layer. 5. The computer-implemented method of claim 3 wherein the first LSTM layer does not share a weight matrix with the second LSTM layer. 6. The computer-implemented method of claim 1 wherein the second component is a deep Convolutional Neural network (CNN). 7. The computer-implemented method of claim 1 wherein the CNN is pre-trained and is fixed during question answering training. 8. 
The computer-implemented method of claim 1 wherein the first, the third, and the fourth components are jointly trained together. 9. The computer-implemented method of claim 3 wherein the fourth component is a fusing component comprising: a fusing layer that fuses information from the first LSTM layer, the second LSTM layer, and the second component to generate a dense multimodal representation for the current word in the answer; an intermediate layer that maps the dense multimodal representation in the fusing layer to a dense word representation; and a softmax layer that predicts a probability distribution of the next word in the answer. 10. A computer-implemented method that improves computer-user interaction by generating an answer to a question input related to an image input, the method comprising: extracting a semantic meaning of a question input using a first long short term memory (LSTM) component comprising a first word-embedding layer and a first LSTM layer; generating a representation of an image input related to the question input using a deep Convolutional Neural network (CNN) component; extracting a representation of a current word of the answer using a second LSTM component comprising a second word-embedding layer and a second LSTM layer; and fusing the semantic meaning, the representation of the image input, and a representation of the current word of the answer to predict a next word of the answer. 11. The computer-implemented method of claim 10 wherein the first word-embedding layer shares a weight matrix with the second word-embedding layer. 12. The computer-implemented method of claim 10 wherein the first LSTM layer does not share a weight matrix with the second LSTM layer. 13. The computer-implemented method of claim 10 wherein the deep CNN is pre-trained and is fixed during question answering training. 14. 
The computer-implemented method of claim 11 wherein predicting the next word in the answer further comprises: fusing information from the first LSTM layer, the second LSTM layer, and the CNN in a fusion layer to generate a dense multimodal representation for the current answer word; mapping in an intermediate layer the dense multimodal representation to a dense word representation; and predicting in a softmax layer a probability distribution of the next word in the answer. 15. The computer-implemented method of claim 14 wherein the dense multimodal representation in the fusion layer is a non-linear activation function. 16. The computer-implemented method of claim 15 wherein the non-linear activation function is a scaled hyperbolic tangent function. 17. The computer-implemented method of claim 15 wherein the first word-embedding layer, the second word-embedding layer, and the softmax layer share a weight matrix. 18. A non-transitory computer-readable medium or media comprising one or more sequences of instructions which, when executed by one or more processors, causes the steps to be performed comprising: responsive to receiving from a user a question input, extracting a semantic meaning of the question input; responsive to receiving an image input related to the question input, generating a representation of the image input; starting with a start sign as a current word in an answer to the question input based upon the image input, generating a next answer word based on a fusion of the semantic meaning, the representation of the image input, and a semantic current answer word and adding it to the answer; repeating the next answer word generating step until an end sign of the answer is generated; and responsive to obtaining an end sign, outputting the answer. 19. 
The non-transitory computer-readable medium or media of claim 18 wherein generating the next answer word comprises: fusing information from the first LSTM layer, the second LSTM layer, and the CNN in a fusion layer for a dense multimodal representation for the current answer word; mapping in an intermediate layer the dense multimodal representation to a dense word representation; and predicting in a softmax layer a probability distribution of the next word in the answer. 20. The non-transitory computer-readable medium or media of claim 19 wherein the softmax layer has a weight matrix to decode the dense word representation into a pseudo one-word representation.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Embodiments of a multimodal question answering (mQA) system are presented to answer a question about the content of an image. In embodiments, the model comprises four components: a Long Short-Term Memory (LSTM) component to extract the question representation; a Convolutional Neural Network (CNN) component to extract the visual representation; an LSTM component for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. A Freestyle Multilingual Image Question Answering (FM-IQA) dataset was constructed to train and evaluate embodiments of the mQA model. The quality of the generated answers of the mQA model on this dataset is evaluated by human judges through a Turing Test.
G06N502
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Embodiments of a multimodal question answering (mQA) system are presented to answer a question about the content of an image. In embodiments, the model comprises four components: a Long Short-Term Memory (LSTM) component to extract the question representation; a Convolutional Neural Network (CNN) component to extract the visual representation; an LSTM component for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. A Freestyle Multilingual Image Question Answering (FM-IQA) dataset was constructed to train and evaluate embodiments of the mQA model. The quality of the generated answers of the mQA model on this dataset is evaluated by human judges through a Turing Test.
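The fourth (fusing) component in the mQA claims combines the question LSTM state, the CNN image feature, and the answer LSTM state at the current word, applies a scaled hyperbolic tangent, and emits a softmax distribution over the next answer word. A NumPy sketch of that single decoding step; the toy dimensions, random weights, and the particular 1.7159·tanh(2x/3) scaling constants are assumptions standing in for trained components:

```python
import numpy as np

# Toy sizes and random weights stand in for trained components; all values
# here are illustrative, not taken from the patent.
rng = np.random.default_rng(0)
d_q, d_i, d_a, d_fuse, vocab = 8, 16, 8, 12, 20

q_vec = rng.standard_normal(d_q)    # first component: question LSTM state
img_vec = rng.standard_normal(d_i)  # second component: CNN image representation
ans_vec = rng.standard_normal(d_a)  # third component: answer LSTM state at the current word

# Fourth (fusing) component: project each input into a shared space, combine
# with a scaled hyperbolic tangent, then softmax over the vocabulary.
W_q = rng.standard_normal((d_fuse, d_q)) * 0.1
W_i = rng.standard_normal((d_fuse, d_i)) * 0.1
W_a = rng.standard_normal((d_fuse, d_a)) * 0.1
W_out = rng.standard_normal((vocab, d_fuse)) * 0.1

fused = 1.7159 * np.tanh((2.0 / 3.0) * (W_q @ q_vec + W_i @ img_vec + W_a @ ans_vec))
logits = W_out @ fused
probs = np.exp(logits - logits.max())  # numerically stable softmax
probs /= probs.sum()

next_word = int(probs.argmax())  # greedy choice of the next answer word
print(probs.shape)  # (20,)
```

Decoding would repeat this step, feeding each generated word back through the answer LSTM until the end sign is produced, as claim 18 describes.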
Embodiments relate to determining likelihood of presence of anomaly in a target system based on the accuracy of the predictions. A predictive model makes predictions based at least on the input data from the target system that change over time. The accuracy of the predictions over time is determined by comparing actual values against predictions for these actual values. The accuracy of the predictions is analyzed to generate an anomaly model indicating anticipated changes in the accuracy of predictions made by the predictive model. When the accuracy of subsequent predictions does not match the range or distribution as anticipated by the anomaly model, a determination can be made that the target system is likely in an anomalous state.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of detecting anomaly in a target system, comprising: receiving input data associated with the target system; generating a prediction by executing one or more predictive algorithms based on the received input data; generating a current accuracy score representing accuracy of the prediction made by the predictive algorithm; and determining an anomaly score representing likelihood that the target system is in an anomalous state based on the current or one or more recent accuracy scores by referencing an anomaly model representing an anticipated range, or distribution, of accuracy scores made by the predictive model. 2. The method of claim 1, further comprising comparing the prediction with an actual value corresponding to the prediction to generate the current accuracy score. 3. The method of claim 1, further comprising generating the anomaly model by analyzing a plurality of prior accuracy scores generated prior to generating of the current accuracy score, the prior accuracy scores generated by executing the predictive algorithm based on training data or prior input data and comparing the plurality of predictions against a plurality of corresponding actual values. 4. The method of claim 1, wherein the accuracy score takes one of a plurality of discrete values, and the likelihood is determined by computing a difference in cumulative distribution function (CDF) values at an upper end and a lower end of one of the plurality of discrete values. 5. The method of claim 1, wherein determining the likelihood comprises: computing a running average of the current accuracy score and prior accuracy scores preceding the current accuracy score; and determining the anomaly score by identifying an output value of the anomaly model corresponding to the running average. 6. 
The method of claim 5, wherein a number of the prior accuracy scores for computing the running average is dynamically changed based on predictability of the input data. 7. The method of claim 1, further comprising aggregating the accuracy score with one or more prior accuracy scores generated using the input data at time steps prior to a current time step for computing the current accuracy score. 8. The method of claim 7, further comprising receiving a user input indicating a time period represented by the aggregated accuracy score. 9. The method of claim 8, further comprising increasing or decreasing a time period represented by the aggregated accuracy score responsive to receiving another user input. 10. The method of claim 1, wherein the predictive algorithm generates the prediction using a hierarchical temporal memory (HTM) or a cortical learning algorithm. 11. The method of claim 1, further comprising generating a plurality of predictions including the prediction and a corresponding plurality of current accuracy scores based on the same input data, each of the plurality of predictions associated with a different parameter of the target system, the likelihood that the target system is in the anomalous state is determined based on a combined accuracy score that combines the plurality of current accuracy scores. 12. The method of claim 1, further comprising generating a plurality of predictions including the prediction and a corresponding plurality of current accuracy scores based on the same input data and associated with different parameters of the target system, the likelihood that the target system is in the anomalous state is determined based on a change in correlation of at least two of the plurality of current accuracy scores. 13. 
An anomaly detector for detecting an anomalous state in a target system, comprising: a processor; a data interface configured to receive input data associated with the target system; a predictive algorithm module configure to: generate a prediction by executing one or more predictive algorithms based on the received input data, and generate a current accuracy score representing accuracy of the prediction; and an anomaly processor configured to determine an anomaly score representing likelihood that the target system is in an anomalous state based on the current accuracy score by referencing an anomaly model representing an anticipated range, or distribution of accuracy of predictions made by the predictive model. 14. The anomaly detector of claim 13, wherein the predictive algorithm module is further configured to compare the prediction with an actual value corresponding to the prediction to generate the current accuracy score. 15. The anomaly detector of claim 13, wherein the anomaly processor is configured to generate the anomaly model by analyzing a plurality of prior accuracy scores generated prior to generating the current accuracy score, the prior accuracy scores generated by executing the predictive algorithm based on training data or prior input data provided to the predictive and comparing the plurality of predictions against a plurality of corresponding actual values. 16. The anomaly detector of claim 13, wherein the accuracy score takes one of a plurality of discrete values, and the anomaly processor is configured to determine the likelihood by computing a difference in cumulative distribution function (CDF) values at an upper end and a lower end of one of the plurality of discrete values. 17. 
The anomaly detector of claim 13, wherein the anomaly processor is further configured to: compute a running average of the current accuracy score and prior accuracy scores preceding the current accuracy score; and determine the accuracy score by identifying an output value of the anomaly model corresponding to the running average. 18. The anomaly detector of claim 17, wherein a number of the prior accuracy scores for computing the running average is dynamically changed based on predictability of the input data. 19. The anomaly detector of claim 13, further comprising a statistics module configured to aggregate the accuracy score with one or more prior accuracy scores generated using the input data at time steps prior to a current time step for computing the current accuracy score. 20. A non-transitory computer readable storage medium storing instructions thereon, the instructions when executed by a processor causing the processor to: receive input data associated with the target system; generate a prediction by executing one or more predictive algorithms based on the received input data; generate a current accuracy score representing accuracy of the prediction made by the predictive algorithm; and determine an anomaly score representing likelihood that the target system is in an anomalous state based on the current accuracy score by referencing an anomaly model representing an anticipated range, or distribution in accuracy of predictions made by the predictive model.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Embodiments relate to determining likelihood of presence of anomaly in a target system based on the accuracy of the predictions. A predictive model makes predictions based at least on the input data from the target system that change over time. The accuracy of the predictions over time is determined by comparing actual values against predictions for these actual values. The accuracy of the predictions is analyzed to generate an anomaly model indicating anticipated changes in the accuracy of predictions made by the predictive model. When the accuracy of subsequent predictions does not match the range or distribution as anticipated by the anomaly model, a determination can be made that the target system is likely in an anomalous state.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Embodiments relate to determining likelihood of presence of anomaly in a target system based on the accuracy of the predictions. A predictive model makes predictions based at least on the input data from the target system that change over time. The accuracy of the predictions over time is determined by comparing actual values against predictions for these actual values. The accuracy of the predictions is analyzed to generate an anomaly model indicating anticipated changes in the accuracy of predictions made by the predictive model. When the accuracy of subsequent predictions does not match the range or distribution as anticipated by the anomaly model, a determination can be made that the target system is likely in an anomalous state.
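Claims 16 and 17 of the anomaly-detector record above describe two concrete mechanisms: treating the accuracy score as one of a set of discrete values and scoring its rarity as a difference of CDF values at the value's upper and lower ends, plus smoothing with a running average. The sketch below illustrates only the CDF idea under those assumptions; it is not the patent's implementation, and every function name here is invented.

```python
from collections import Counter

def build_anomaly_model(prior_scores):
    """Empirical distribution of discrete accuracy scores observed during
    training: for each value, record the CDF just below it and at it."""
    counts = Counter(prior_scores)
    total = len(prior_scores)
    cdf = {}
    cum = 0.0
    for value in sorted(counts):
        lower = cum
        cum += counts[value] / total
        cdf[value] = (lower, cum)  # (CDF at lower end, CDF at upper end)
    return cdf

def anomaly_score(model, score):
    """Probability mass of the observed score is CDF(upper) - CDF(lower);
    the anomaly score is the complement, so rare scores score high."""
    if score not in model:
        return 1.0  # an accuracy value never seen in training: maximally anomalous
    lower, upper = model[score]
    return 1.0 - (upper - lower)
```

A score value that was common among the prior accuracy scores yields a low anomaly score; a rare or never-seen value yields a high one.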
An exemplary system, method and computer-accessible medium for generating a model(s), can include, for example, receiving first information related to raw data, generating second information by formatting the first information, generating third information related to a feature set(s) of the second information, generating the model(s) based on the second and third information. Fourth information related to a user-defined regularization of the second information can be received, fifth information can be generated based on a reformatting of the second information using the fourth information. A prediction(s) can be generated based on the model(s). The prediction(s) can be generated based on a time horizon(s).
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A non-transitory computer-accessible medium having stored thereon computer-executable instructions for generating at least one model, wherein, when a computer arrangement executes the instructions, the computer arrangement is configured to perform procedures, comprising: receiving first information related to raw data; generating second information by formatting the first information; generating third information related to at least one feature set of the second information; and generating the at least one model based on the second and third information. 2. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to (i) receive fourth information related to a user-defined regularization of the second information, and (ii) generate fifth information based on a reformatting of the second information using the fourth information. 3. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to generate at least one prediction based on the at least one model. 4. The computer-accessible medium of claim 3, wherein the computer arrangement is further configured to generate the at least one prediction based on at least one time horizon. 5. The computer-accessible medium of claim 3, wherein the computer arrangement is further configured to determine fourth information related to a potential information value for the second information based on the third information. 6. The computer-accessible medium of claim 5, wherein the computer arrangement is further configured to determine the fourth information using at least one of a simple regression procedure or a correlation analysis. 7. 
The computer-accessible medium of claim 3, wherein the second information includes a plurality of discrete data columns, and wherein the computer arrangement is further configured to generate a plurality of equations based on a plurality of combinations of a set of data columns of the data columns. 8. The computer-accessible medium of claim 5, wherein the second information includes a plurality of discrete data columns, and wherein the computer arrangement is further configured to determine fifth information related to how a first data column of the data columns is linked with at least one further data column of the data columns. 9. The computer-accessible medium of claim 8, wherein the computer arrangement is further configured to generate the third information based on the second information. 10. The computer-accessible medium of claim 9, wherein the computer arrangement is further configured to assign a score to each set of the feature sets based on a correlation of each respective one of the feature sets to the at least one prediction. 11. The computer-accessible medium of claim 10, wherein the computer arrangement is further configured to select a particular feature set based on the score. 12. The computer-accessible medium of claim 1, wherein the computer arrangement is configured to generate the at least one model using an islanding procedure based on the first information and the second information. 13. The computer-accessible medium of claim 12, wherein the islanding procedure includes generating, using the computer arrangement, a plurality of subsets of the second information. 14. The computer-accessible medium of claim 13, wherein the islanding procedure further includes assigning one or more species to each subset of the subsets using the computer arrangement. 15. The computer-accessible medium of claim 14, wherein the computer arrangement is configured to assign the one or more species based on a performance of each subset. 16.
The computer-accessible medium of claim 15, wherein the performance includes a comparison of each subset relative to its historical performance. 17. The computer-accessible medium of claim 1, wherein the computer arrangement is configured to generate the at least one model using at least one neural network. 18. The computer-accessible medium of claim 17, wherein the at least one neural network is at least one evolutionary neural network. 19. The computer-accessible medium of claim 18, wherein the at least one model is at least one genomic model, and the at least one evolutionary neural network is at least one evolutionary neural network with at least one of at least one mutation or at least one recombination. 20. The computer-accessible medium of claim 19, wherein the at least one of the at least one mutation or the at least one recombination includes at least one rate that is tunable using at least one hyperparameter. 21. A method for generating at least one model, comprising: receiving first information related to raw data; generating second information by formatting the first information; generating third information related to at least one feature set of the second information; and using a computer hardware arrangement, generating the at least one model based on the second and third information. 22-40. (canceled) 41. A system for generating at least one model, comprising: at least one computer hardware arrangement configured to: receive first information related to raw data; generate second information by formatting the first information; generate third information related to at least one feature set of the second information; and generate the at least one model based on the second and third information. 42-60. (canceled)
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An exemplary system, method and computer-accessible medium for generating a model(s), can include, for example, receiving first information related to raw data, generating second information by formatting the first information, generating third information related to a feature set(s) of the second information, generating the model(s) based on the second and third information. Fourth information related to a user-defined regularization of the second information can be received, fifth information can be generated based on a reformatting of the second information using the fourth information. A prediction(s) can be generated based on the model(s). The prediction(s) can be generated based on a time horizon(s).
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An exemplary system, method and computer-accessible medium for generating a model(s), can include, for example, receiving first information related to raw data, generating second information by formatting the first information, generating third information related to a feature set(s) of the second information, generating the model(s) based on the second and third information. Fourth information related to a user-defined regularization of the second information can be received, fifth information can be generated based on a reformatting of the second information using the fourth information. A prediction(s) can be generated based on the model(s). The prediction(s) can be generated based on a time horizon(s).
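Claims 10 and 11 of the model-generation record above assign a score to each feature set based on its correlation to the prediction and then select by score, and claim 6 mentions "a simple regression procedure or a correlation analysis." A rough illustration using Pearson correlation follows; the function names and the choice of Pearson specifically are assumptions, not details taken from the patent.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_feature_set(feature_sets, target):
    """Score each candidate feature column by |correlation| to the target
    values and return the best-scoring name plus all scores."""
    scores = {name: abs(pearson(col, target))
              for name, col in feature_sets.items()}
    return max(scores, key=scores.get), scores
```

With real data the target would be the at-least-one prediction from the generated model; here any numeric column stands in for it.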
A fabric selection tool provides an automated procedure for recommending and/or selecting a fabric for a window treatment to be installed in a building. The recommendation may be made to optimize the performance of the window treatment in which the fabric may be installed. The recommended fabric may be selected based on performance metrics associated with each fabric in an environment. The fabrics may be ranked based upon the performance metrics of one or more of the fabrics. One or more of the fabrics, and/or their corresponding ranks, may be displayed to a user for selection. The recommended fabrics may be determined based on combinations of fabrics that provide performance metrics for various façades of the building. Using the ranking system provided by the fabric selection tool, the user may obtain a fabric sample and/or order one or more of the recommended fabrics.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for recommending a window treatment fabric, the method comprising: determining at least one position of window treatments that are controlled by an automated window treatment control system, wherein the at least one position of the window treatments cause at least a portion of at least one window of an interior space to be covered by the window treatments within at least one calendar day, wherein the at least one position is determined during at least two different time frames within the at least one calendar day; and presenting a recommendation to a user for at least one fabric of the window treatments to be used for at least one window, wherein the recommendation is based on the determined at least one position of the window treatments that are controlled by the automated window treatment control system. 2. The method of claim 1, wherein the determined at least one position of the window treatments is determined based on automated window treatment control information. 3. The method of claim 2, wherein the automated window treatment control information includes an angle of the sun, sensor information, an amount of cloud cover, or weather data. 4. The method of claim 2, wherein the automated window control system is configured to adjust the positions of the window treatments in response to at least one light intensity measured by a sensor. 5. The method of claim 2, wherein the automated window control system is configured to adjust the positions of the window treatments at intervals to minimize occupant distractions. 6. The method of claim 2, wherein the determined at least one position of the window treatments is determined based on a calculated angle of the sun to limit a sunlight penetration distance in a space of a building. 7. 
The method of claim 1, further comprising: computing at least one associated score for the at least one fabric of the window treatment based on at least one predicted performance metric, wherein the recommendation is based on the at least one associated score for the at least one fabric of the window treatment. 8. The method of claim 7, wherein the at least one associated score comprises at least one of a glare score that indicates a predicted amount of glare resulting in a building from use of the at least one fabric in the window treatment, a view score that indicates an occupant's predicted amount of view out of the at least one window when the window treatment is installed, or a daylight score that indicates a predicted amount of daylight resulting in the interior space from use of the fabric in the window treatment. 9. The method of claim 1, wherein the interior space is within a building, and wherein the automated window treatment control system determines the at least one position of the at least one window of the interior space to be covered by the window treatments based at least on astronomical information about the building. 10. The method of claim 1, wherein the automated window treatment control system determines the at least one position of the at least one window treatment to affect at least one predicted performance metric associated with daylight entering the interior space through the at least one window, wherein the at least one predicted performance metric comprises at least one of a daylight glare probability, a spatial daylight autonomy, or a view preservation, and wherein the daylight glare probability indicates a maximum daylight glare intensity over a period of time, the spatial daylight autonomy indicates an amount of floor space in a building where daylight alone may provide light over a period of time, and view preservation indicates an amount of the at least one window that may be unobstructed by the window treatment. 11.
The method of claim 1, wherein at least one predicted performance metric is received from a fabric performance engine that calculates the at least one predicted performance metric based on environmental characteristics associated with a building in which the at least one fabric of the window treatments will be installed and fabric data associated with the at least one fabric. 12. The method of claim 1, wherein the window treatment comprises at least one of a manual window treatment or an automated window treatment, and wherein at least one predicted performance metric is based on a predicted performance when installed in the at least one of the manual window treatment or the automated window treatment. 13. The method of claim 12, wherein ranking the plurality of fabrics further comprises ranking the plurality of fabrics based on the predicted performance metrics when the fabric is installed in the at least one of the manual window treatment or the automated window treatment. 14. The method of claim 12, further comprising: comparing the at least one predicted performance metric of each fabric of the plurality of fabrics when installed in the automated window treatment to the at least one predicted performance metric of the fabric when installed in the manual window treatment; and displaying the at least one performance metric of the recommended fabric when installed in the automated window treatment and the at least one performance metric of the recommended fabric when installed in the manual window treatment on a visual display. 15. 
A motorized window treatment configured to be mounted adjacent to a window of an interior space, the motorized window treatment comprising: a motor drive unit responsive to an automated control system; and a window treatment configured to be installed on or around the window in such a way that the motor drive unit is configured to adjust a position of the window treatment in response to the automated control system, the window treatment selected by determining at least one position of the window treatment as controlled by the automated control system to cause at least a portion of the window to be covered by the window treatment during at least two different time frames within the at least one calendar day, and presenting a recommendation to a user for at least one fabric of the window treatment to be used for the window, where the recommendation is based on the determined at least one position of the window treatment as controlled by the automated control system. 16. The motorized window treatment of claim 15, wherein the motor drive unit is configured to receive a digital message and to adjust the position of the window treatment in response to the received digital message. 17. The motorized window treatment of claim 16, wherein the motor drive unit is configured to adjust the position of the window treatment in response to at least one light intensity measured by a sensor. 18. The motorized window treatment of claim 16, wherein the motor drive unit is configured to adjust the position of the window treatment in response to the digital message at intervals to minimize occupant distractions. 19. The motorized window treatment of claim 16, wherein the motor drive unit is configured to adjust the position of the window treatment in response to the digital message to limit a sunlight penetration distance in a space in which the window is located.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A fabric selection tool provides an automated procedure for recommending and/or selecting a fabric for a window treatment to be installed in a building. The recommendation may be made to optimize the performance of the window treatment in which the fabric may be installed. The recommended fabric may be selected based on performance metrics associated with each fabric in an environment. The fabrics may be ranked based upon the performance metrics of one or more of the fabrics. One or more of the fabrics, and/or their corresponding ranks, may be displayed to a user for selection. The recommended fabrics may be determined based on combinations of fabrics that provide performance metrics for various façades of the building. Using the ranking system provided by the fabric selection tool, the user may obtain a fabric sample and/or order one or more of the recommended fabrics.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A fabric selection tool provides an automated procedure for recommending and/or selecting a fabric for a window treatment to be installed in a building. The recommendation may be made to optimize the performance of the window treatment in which the fabric may be installed. The recommended fabric may be selected based on performance metrics associated with each fabric in an environment. The fabrics may be ranked based upon the performance metrics of one or more of the fabrics. One or more of the fabrics, and/or their corresponding ranks, may be displayed to a user for selection. The recommended fabrics may be determined based on combinations of fabrics that provide performance metrics for various façades of the building. Using the ranking system provided by the fabric selection tool, the user may obtain a fabric sample and/or order one or more of the recommended fabrics.
The present invention provides systems and methods for data storage. A hierarchical storage management architecture is presented to facilitate data management. The disclosed system provides methods for evaluating the state of stored data relative to enterprise needs by using weighted parameters that may be user defined. Also disclosed are systems and methods evaluating costing and risk management associated with stored data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. (canceled) 2. A method to predict storage use in a computer network, the method comprising: directing with a first storage component comprising computer hardware, a second storage component to copy at least a portion of primary data stored on at least a first storage resource to secondary data stored on at least a second storage resource, wherein the first and second storage components are arranged in a hierarchy, and wherein the second storage component identifies the second storage resource with a first identifier; sending from the second storage component to the first storage component at least the first identifier associated with the second storage resource; directing with a first storage component comprising computer hardware, a third storage component to monitor usage associated with the second storage resource, wherein the first and third storage components are arranged in a hierarchy, and wherein the third storage component identifies the second storage resource with a second identifier; sending from the third storage component to the first storage component, the second identifier associated with the second storage resource, and usage data derived at least in part from the monitoring of the usage associated with the second storage resource; determining with the first storage component that the first identifier and the second identifier are associated with the second storage resource; and predicting, based at least in part on the usage data and with a computing device comprising computer hardware, one or more of future storage media use, future media growth, future network bandwidth use, and future media agent use. 3. The method of claim 2 further comprising calculating a storage cost for the second storage resource based on the usage data. 4. 
The method of claim 2 further comprising apportioning a storage cost of the second storage resource among a plurality of departments based on the usage data. 5. The method of claim 2 wherein said predicting comprises calculating a moving average of a certain network operation during the time period. 6. The method of claim 2 wherein said predicting comprises calculating a moving average. 7. The method of claim 2 wherein said predicting comprises calculating a seasonal index. 8. The method of claim 2 wherein said predicting comprises calculating an average index for each day in a monitored time period. 9. The method of claim 2 wherein said predicting comprises performing a linear interpolation on a moving average. 10. The method of claim 2 wherein the second storage resource is identified with a first name by the second storage component, and identified with a second name by the third storage component that is different than the first name. 11. The method of claim 2 wherein said determining that the first identifier and the second identifier are associated with the second storage resource is based on network identifiers. 12.
A system configured to predict storage use in a computer network, the system comprising: a first storage component comprising computer hardware, the first storage component configured to direct a second storage component to copy at least a portion of primary data stored on at least a first storage resource to secondary data stored on at least a second storage resource, wherein the first and second storage components are arranged in a hierarchy, and wherein the second storage component identifies the second storage resource with a first identifier; wherein the second storage component is configured to send to the first storage component, at least the first identifier associated with the second storage resource; wherein the first storage component is configured to direct a third storage component to monitor usage associated with the second storage resource, wherein the first and third storage components are arranged in a hierarchy, and wherein the third storage component identifies the second storage resource with a second identifier; wherein the third storage component is configured to send to the first storage component, the second identifier associated with the second storage resource, and usage data derived at least in part from the monitoring of the usage associated with the second storage resource; wherein the first storage component is further configured to determine that the first identifier and the second identifier are associated with the second storage resource; and wherein the first storage component is further configured to, based at least in part on the usage data, predict one or more of future storage media use, future media growth, future network bandwidth use, and future media agent use. 13. The system of claim 12 wherein the first storage component is further configured to calculate a storage cost for the second storage resource based on the usage data. 14.
The system of claim 12 wherein the first storage component is further configured to apportion a storage cost of the second storage resource among a plurality of departments based on the usage data. 15. The system of claim 12 wherein the first storage component is configured to perform the prediction at least in part by calculating a moving average of a certain network operation during the time period. 16. The system of claim 12 wherein the first storage component is configured to perform the prediction based at least in part on the calculation of a moving average. 17. The system of claim 12 wherein the first storage component is configured to perform the prediction based at least in part on the calculation of a seasonal index. 18. The system of claim 12 wherein the first storage component is further configured to perform the prediction based at least in part on the calculation of an average index for each day in the time period. 19. The system of claim 12 wherein the second storage component identifies the second storage resource with a first name, and the third storage component identifies the second storage resource with a second name that is different than the first name. 20. The system of claim 12 wherein the first storage component is further configured to determine that the first identifier and the second identifier are associated with the second storage resource based on network identifiers.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: The present invention provides systems and methods for data storage. A hierarchical storage management architecture is presented to facilitate data management. The disclosed system provides methods for evaluating the state of stored data relative to enterprise needs by using weighted parameters that may be user defined. Also disclosed are systems and methods evaluating costing and risk management associated with stored data.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The present invention provides systems and methods for data storage. A hierarchical storage management architecture is presented to facilitate data management. The disclosed system provides methods for evaluating the state of stored data relative to enterprise needs by using weighted parameters that may be user defined. Also disclosed are systems and methods evaluating costing and risk management associated with stored data.
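Claims 5, 6, and 9 of the storage-prediction record above predict future use by calculating a moving average and performing a linear interpolation on it. The sketch below is one plausible reading of that combination — extrapolate the last two moving-average points — and is an invented illustration, not the patent's method.

```python
def moving_average(series, window):
    """Trailing moving average over the last `window` observations."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def forecast_next(series, window=3):
    """Predict the next value by linearly extending the last two
    moving-average points (one reading of 'a linear interpolation
    on a moving average')."""
    ma = moving_average(series, window)
    if len(ma) < 2:
        return ma[-1]
    return ma[-1] + (ma[-1] - ma[-2])
```

A seasonal index (claim 7) could be layered on top by scaling the forecast with a per-day average index, which this sketch omits.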
Contextual adaptation of documents automatically replaces words with synonyms that fit the context or topic in which they are being used. A machine-learned topic model, trained on a set of documents representative of a target user, is executed to determine topics of an input document, and to determine which words in the document to replace based on the relevance of those words to the topics in the document. An output document is generated based on the input document with the replaced words.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for contextual text adaptation, comprising: one or more hardware processors; a topic model algorithm executable on one or more of the hardware processors, the topic model algorithm generated by machine learning based on a corpus of documents at least related to context of a target user, the topic model comprising a first function that predicts probability distribution of a plurality of topics in a given document, and a second function that predicts probability of a given word occurring in a document associated with a given topic, one or more of the hardware processors operable to receive an input document, one or more of the hardware processors further operable to determine input document topics associated with the input document and a normalized weight associated with each of the input document topics by executing the first function, one or more of the hardware processors further operable to determine an aggregate probability indicating relevance of an input document word to the input document topics based on executing the second function, one or more of the hardware processors further operable to determine a synonym of the input document word based on a dictionary of synonyms, one or more of the hardware processors further operable to determine an aggregate probability for the synonym based on executing the second function, one or more of the hardware processors further operable to compare the aggregate probability for the synonym and the aggregate probability for the input document word, and responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, one or more of the hardware processors further operable to replace the input document word with the synonym, one or more of the hardware processors further operable to generate an output document comprising content of the input document with 
replaced word. 2. The system of claim 1, wherein one or more of the hardware processors communicate with a social media server to retrieve the corpus of documents. 3. The system of claim 1, wherein the corpus of documents comprises web postings the target user accesses on the social media server. 4. The system of claim 1, wherein the social media server presents the output document on a web page associated with the social media server. 5. The system of claim 1, wherein one or more of the processors determines the aggregate probability indicating relevance of an input document word to the input document topics, determines the aggregate probability for the synonym, compares the aggregate probability for the synonym and the aggregate probability for the input document word, and replaces the input document word with the synonym responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, for each of a plurality of input document words in the input document. 6. The system of claim 1, wherein the aggregate probability for the input document word is determined as a sum of products of the probability that the input document word is associated with an input document topic and the normalized weight of the input document topic. 7. The system of claim 1, wherein multiple synonyms are determined for the input document word and the aggregate probability is determined for each of the multiple synonyms, wherein the synonym with maximum aggregate probability among the multiple synonyms is selected for the comparing with the aggregate probability for the input document word. 8. 
A computer-implemented method of contextual text adaptation, the method performed by one or more hardware processors, comprising: receiving a corpus of documents in context of a target user; receiving a dictionary of synonyms; generating a topic model algorithm based on at least the corpus of documents by machine learning, the topic model algorithm comprising a first function that predicts probability distribution of a plurality of topics in a given document, and a second function that predicts probability of a given word occurring in a document associated with a given topic; receiving an input document; determining input document topics associated with the input document and a normalized weight associated with each of the input document topics by executing the first function; determining an aggregate probability indicating relevance of an input document word to the input document topics based on executing the second function; determining a synonym of the input document word based on the dictionary of synonyms; determining an aggregate probability for the synonym based on executing the second function; comparing the aggregate probability for the synonym and the aggregate probability for the input document word; and responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, replacing the input document word with the synonym; and generating an output document comprising content of the input document with replaced word. 9. 
The method of claim 8, wherein the determining of an aggregate probability indicating relevance of an input document word to the input document, the determining of an aggregate probability for the synonym, the comparing of the aggregate probability for the synonym and the aggregate probability for the input document word, and the replacing of the input document word with the synonym responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, is performed for each of a plurality of input document words in the input document. 10. The method of claim 8, wherein the aggregate probability for the input document word is determined as a sum of products of the probability that the input document word is associated with an input document topic and the normalized weight of the input document topic. 11. The method of claim 8, wherein multiple synonyms are determined for the input document word and the aggregate probability is determined for each of the multiple synonyms, wherein the synonym with maximum aggregate probability among the multiple synonyms is selected for the comparing with the aggregate probability for the input document word. 12. The method of claim 8, wherein the corpus of documents are received over a communication network from a social media server. 13. The method of claim 8, wherein the corpus of documents comprises web postings the target user accesses. 14. 
A computer readable storage medium storing a program of instructions executable by a machine to perform a method of contextual text adaptation, the method comprising: identifying a target user; receiving a corpus of documents in context of the target user; receiving a dictionary of synonyms; generating a topic model algorithm based on at least the corpus of documents by machine learning, the topic model algorithm comprising a first function that predicts probability distribution of a plurality of topics in a given document, and a second function that predicts probability of a given word occurring in a document associated with a given topic; and receiving an input document; determining input document topics associated with the input document and a normalized weight associated with each of the input document topics by executing the first function; determining a probability that an input document word is associated with an input document topic for each of the input document topics by executing the second function; determining an aggregate probability for the input document word as a sum of products of the probability that an input document word is associated with an input document topic and the normalized weight of the input document topic; determining a synonym of the input document word based on the dictionary of synonyms; determining an aggregate probability for the synonym; comparing the aggregate probability for the synonym and the aggregate probability for the input document word; responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, replacing the input document word with the synonym; and generating an output document comprising content of the input document with replaced word. 15. 
The computer readable storage medium of claim 14, wherein the aggregate probability for the input document word is determined as a sum of products of the probability that the input document word is associated with an input document topic and the normalized weight of the input document topic. 16. The computer readable storage medium of claim 14, wherein multiple synonyms are determined for the input document word and the aggregate probability is determined for each of the multiple synonyms, wherein the synonym with maximum aggregate probability among the multiple synonyms is selected for the comparing with the aggregate probability for the input document word. 17. The computer readable storage medium of claim 14, wherein the corpus of documents are received over a communication network from a social media server. 18. The computer readable storage medium of claim 14, wherein the corpus of documents comprises web postings the target user accesses.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Contextual adaptation of documents automatically replaces words with synonyms that fit the context or topic where they are being used. A machine-learned topic model, trained on a set of documents representative of a target user, is executed to determine topics of an input document, and to determine words in the document to replace based on determining the relevance of the words to the topics in the document. An output document is generated based on the input document with the replaced words.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Contextual adaptation of documents automatically replaces words with synonyms that fit the context or topic where they are being used. A machine-learned topic model, trained on a set of documents representative of a target user, is executed to determine topics of an input document, and to determine words in the document to replace based on determining the relevance of the words to the topics in the document. An output document is generated based on the input document with the replaced words.
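The selection rule in the claims above (an aggregate probability computed as a sum of products of P(word | topic) and the normalized topic weight, with a synonym substituted only when its aggregate probability exceeds that of the original word) can be sketched as follows. All names here are hypothetical illustrations; the patent does not prescribe an implementation.

```python
# Hypothetical sketch of the aggregate-probability synonym replacement
# in claims 1, 8 and 14; names are illustrative only.

def aggregate_probability(word, topic_weights, word_given_topic):
    """Sum over topics of P(word | topic) * normalized topic weight."""
    return sum(word_given_topic(word, t) * w for t, w in topic_weights.items())

def adapt_word(word, synonyms, topic_weights, word_given_topic):
    """Replace `word` with its most topic-relevant synonym, but only if
    that synonym scores strictly higher than the original word."""
    score = lambda w: aggregate_probability(w, topic_weights, word_given_topic)
    best = max(synonyms, key=score, default=None)
    if best is not None and score(best) > score(word):
        return best
    return word
```

Per claims 7, 11 and 16, when a word has multiple synonyms only the one with the maximum aggregate probability is compared against the original word, which the `max(..., default=None)` call mirrors.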
A system for communicating postsynaptic neuron states. The system includes a first neuromorphic core and a second neuromorphic core. The first neuromorphic core includes a first array of synaptic memory cells and postsynaptic neuron circuits. Each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells. Each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold. The second neuromorphic core includes a second array of synaptic memory cells. A neuron bus is configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic core.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for communicating postsynaptic neuron states to a neuromorphic core, the method comprising: storing, in a first transmit buffer, indications of a postsynaptic neuron circuit fire from postsynaptic neuron circuits in a first neuromorphic core, each of the postsynaptic neuron circuits being coupled to a plurality of synaptic memory cells, the indications of postsynaptic neuron circuit fire identifying which of the postsynaptic neuron circuits fired; serially shifting the indications of postsynaptic neuron circuit fire to a neuron bus; receiving the indications of postsynaptic neuron circuit fire from the neuron bus at a plurality of presynaptic neuron circuits in a second neuromorphic core. 2. The method of claim 1, further comprising: updating the indications of postsynaptic neuron circuit fire at the first transmitter buffer when one or more of the postsynaptic neuron circuits at the first neuromorphic core fires; determining if a second transmit buffer at the first neuromorphic core is serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus; transmitting the indications of the postsynaptic neuron circuit fire from the first transmit buffer to the second transmit buffer when the second transmit buffer is not serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first transmit buffer; clearing the indications of the postsynaptic neuron circuit fire at the first transmit buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second transmit buffer. 3. The method of claim 1, further comprising serially shifting the indications of the postsynaptic neuron circuit fire from the neuron bus to a first receive buffer at the second neuromorphic core. 4. 
The method of claim 3, further comprising: determining if the first receive buffer is serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus; transmitting the indications of the postsynaptic neuron circuit fire from the first receive buffer to a second receive buffer at the second neuromorphic core when the first receive buffer is not serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first receive buffer; and clearing the indications of the postsynaptic neuron circuit fire at the first receive buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second receive buffer. 5. The method of claim 1, further comprising firing one or more presynaptic neuron circuits at the second neuromorphic core after receiving the indications of postsynaptic neuron circuit fire by the first receive buffer from the neuron bus. 6. A system for communicating postsynaptic neuron states, the system comprising: a first neuromorphic core including a first array of synaptic memory cells and postsynaptic neuron circuits, each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells, each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold; a second neuromorphic core including a second array of synaptic memory cells; and a neuron bus configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic core. 7. 
The system of claim 6, further comprising: a first transmit buffer coupled to postsynaptic neuron circuits, the first transmit buffer configured to store indications of a postsynaptic neuron circuit fire from each of the postsynaptic neuron circuits, the indications of postsynaptic neuron circuit fire identifying which of the postsynaptic neuron circuits fired; 8. The system of claim 7, further comprising: a second transmit buffer coupled to the first transmit buffer and the neuron bus, the second transmit buffer configured to serially shift the indications of postsynaptic neuron circuit fire to the neuron bus. 9. The system of claim 8, wherein the first transmit buffer is configured to: update the indications of postsynaptic neuron circuit fire at the first transmit buffer when one or more of the postsynaptic neuron circuits at the first neuromorphic core fires; determine if a second transmit buffer at the first neuromorphic core is serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus; transmit the indications of the postsynaptic neuron circuit fire from the first transmit buffer to the second transmit buffer when the second transmit buffer is not serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first transmit buffer; and clear the indications of the postsynaptic neuron circuit fire at the first transmit buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second transmit buffer. 10. The system of claim 6, further comprising: a first receive buffer coupled to the neuron bus, the first receive buffer configured to serially shift the indications of postsynaptic neuron circuit fire from the neuron bus to the second neuromorphic core. 11. 
The system of claim 10, further comprising a second receive buffer coupled to the first receive buffer, the second receive buffer configured to: determine if the first receive buffer is serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus; receive the indications of the postsynaptic neuron circuit fire from the first receive buffer to a second receive buffer at the second neuromorphic core when the first receive buffer is not serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first receive buffer; and clear the indications of the postsynaptic neuron circuit fire at the first receive buffer after receiving the indications of the postsynaptic neuron circuit fire to the second receive buffer. 12. The system of claim 6, wherein the second neuromorphic core includes a plurality of presynaptic neuron circuits configured to receive the indications of postsynaptic neuron circuit fire from the second receive buffer. 13. The system of claim 6, wherein the second neuromorphic core includes a plurality of presynaptic neuron circuits configured to receive the indications of postsynaptic neuron circuit fire from the neuron bus. 14. 
A system for communicating postsynaptic neuron states, the system comprising: a first neuromorphic core including a first array of synaptic memory cells and postsynaptic neuron circuits, each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells, each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold; a plurality of second neuromorphic cores, each of the second neuromorphic cores including a second array of synaptic memory cells; and a neuron bus configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic cores. 15. The system of claim 14, further comprising: a first transmit buffer coupled to postsynaptic neuron circuits, the first transmit buffer configured to store indications of a postsynaptic neuron circuit fire from each of the postsynaptic neuron circuits, the indications of postsynaptic neuron circuit fire identifying which of the postsynaptic neuron circuits fired; 16. The system of claim 15, further comprising: a second transmit buffer coupled to the first transmit buffer and the neuron bus, the second transmit buffer configured to serially shift the indications of postsynaptic neuron circuit fire to the neuron bus. 17. 
The system of claim 16, wherein the first transmit buffer is configured to: update the indications of postsynaptic neuron circuit fire at the first transmit buffer when one or more of the postsynaptic neuron circuits at the first neuromorphic core fires; determine if a second transmit buffer at the first neuromorphic core is serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus; transmit the indications of the postsynaptic neuron circuit fire from the first transmit buffer to the second transmit buffer when the second transmit buffer is not serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first transmit buffer; and clear the indications of the postsynaptic neuron circuit fire at the first transmit buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second transmit buffer. 18. The system of claim 14, wherein each of the second neuromorphic cores includes a first receive buffer coupled to the neuron bus, the first receive buffer configured to serially shift the indications of postsynaptic neuron circuit fire from the neuron bus to a respective one of the second neuromorphic cores. 19. 
The system of claim 18, wherein each of the second neuromorphic cores includes a second receive buffer coupled to the first receive buffer, the second receive buffer configured to: determine if the first receive buffer is serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus; receive the indications of the postsynaptic neuron circuit fire from the first receive buffer to a second receive buffer at the second neuromorphic core when the first receive buffer is not serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first receive buffer; and clear the indications of the postsynaptic neuron circuit fire at the first receive buffer after receiving the indications of the postsynaptic neuron circuit fire to the second receive buffer. 20. The system of claim 19, wherein the each of the second neuromorphic cores includes a plurality of presynaptic neuron circuits configured to receive the indications of postsynaptic neuron circuit fire from the second receive buffer.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system for communicating postsynaptic neuron states. The system includes a first neuromorphic core and a second neuromorphic core. The first neuromorphic core includes a first array of synaptic memory cells and postsynaptic neuron circuits. Each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells. Each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold. The second neuromorphic core includes a second array of synaptic memory cells. A neuron bus is configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic core.
G06N3061
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system for communicating postsynaptic neuron states. The system includes a first neuromorphic core and a second neuromorphic core. The first neuromorphic core includes a first array of synaptic memory cells and postsynaptic neuron circuits. Each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells. Each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold. The second neuromorphic core includes a second array of synaptic memory cells. A neuron bus is configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic core.
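The two-stage transmit buffering in claims 2 and 9 above (a first buffer accumulates fire indications while a second buffer serially shifts a snapshot onto the neuron bus, and the handoff happens only when the second buffer is idle) can be sketched in software as follows. The patent describes hardware; this model and all its names are hypothetical.

```python
# Hypothetical software model of the double-buffered transmit path in
# claims 2 and 9; the patent claims hardware buffers, not this code.

class TransmitPath:
    def __init__(self, n_neurons):
        self.first = [0] * n_neurons   # first transmit buffer: accumulates fires
        self.second = []               # second transmit buffer: shifts to the bus

    def record_fire(self, idx):
        # Update the fire indication when a postsynaptic neuron fires.
        self.first[idx] = 1

    def step(self, bus):
        # One bus cycle: either keep shifting, or hand off pending fires.
        if self.second:                          # busy shifting to the neuron bus
            bus.append(self.second.pop(0))       # serially shift one indication
        elif any(self.first):                    # idle, and fires are pending
            self.second = self.first[:]          # transmit first -> second buffer
            self.first = [0] * len(self.first)   # clear the first buffer
```

The matching receive path (claims 4 and 11) mirrors this arrangement: a first receive buffer shifts indications off the neuron bus while a second receive buffer hands them to the presynaptic neuron circuits.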
A truth maintenance method and system. The method includes receiving by a computer processor from RFID tags embedded in sensors, event data associated with events detected by said sensors. The computer processor associates portions of the event data with associated RFID tags and derives assumption data associated with each portion of the portions. The computer processor retrieves previous assumption data derived from and associated with previous portions of previous event data retrieved from the RFID tags and executes non monotonic logic with respect to the assumption data and the previous assumption data. In response, the computer processor generates and stores updated assumption data associated with the assumption data and the previous assumption data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving, by a computer processor of a computing device from RFID tags embedded in sensors, first event data associated with a first plurality of events detected by said sensors, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory; deriving, by said computer processor executing said inference engine software application, first assumption data associated with each portion of first portions of said first event data associated with associated RFID tags of said RFID tags, wherein said first assumption data comprises multiple sets of assumptions associated with said plurality of events, wherein each set of said multiple sets comprises assumed event conditions and an associated plausibility percentage value, and wherein at least two sets of said multiple sets are associated with each event of said plurality of events; generating, by said computer processor based on results of said deriving and executing the Dempster Shafer theory with respect to a first pair of sets of said multiple sets with respect to a first event of said plurality of events, an initial recommendation for said event, said initial recommendation associated with a first selected set of said first pair of sets, said first selected set comprises a first plausibility percentage value; retrieving, by said computer processor from said truth maintenance system database, previous assumption data derived from and associated with previous portions of previous event data retrieved from said RFID tags embedded in said sensors, said previous assumption data derived at a time differing from a time of said deriving, said previous event data associated with previous events occurring at a different time from said
first plurality of events; executing, by said computer processor, said non monotonic logic with respect to said first assumption data and said previous assumption data; additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first pair of sets and said previous assumption data; generating, by said computer processor based on results of said additionally executing and modifying said first plausibility percentage value of said first selected set, an updated recommendation for said first event, said updated recommendation associated with a second selected set of said first pair of sets, said second selected set differing from said first selected set; and generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated assumption data associated with said first assumption data and said previous assumption data, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with conditions of vehicles detected by said sensors. 2. The method of claim 1, further comprising: executing, by said computer processor based on said first updated assumption data, an action associated with objects detected by said sensors. 3. The method of claim 2, wherein said action comprises implementing a pay by usage cloud metering model associated with said objects. 4. The method of claim 1, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with objects detected by said sensors. 5. 
The method of claim 1, further comprising: receiving, by said computer processor from said RFID tags embedded in said sensors, second event data associated with a second plurality of events detected by said sensors, said second plurality of events occurring at a time differing from said first plurality of events; associating, by said computer processor, first portions of said second event data with associated RFID tags of said RFID tags; deriving, by said computer processor executing said inference engine software application, second assumption data associated with each portion of said first portions of said second event data; retrieving, by said computer processor from said truth maintenance system database, said previous assumption data, said first assumption data, and said first updated assumption data; executing, by said computer processor, said non monotonic logic with respect to said first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; generating, by said computer processor executing said non monotonic logic and said inference engine software application, second updated assumption data associated with first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; and storing, by said computer processor in said truth maintenance system database, said second assumption data. 6. The method of claim 1, wherein said generating first updated assumption data comprises retracting portions of said first assumption data and said previous assumption data. 7. 
The method of claim 1, further comprising: providing at least one support service for at least one of creating, integrating, hosting, maintaining, and deploying computer-readable code in said computing system, wherein the code in combination with the computing system is capable of performing: said receiving, said associating, said deriving, said retrieving, said executing, said generating, and said storing. 8. A computer program product, comprising a computer readable memory device storing a computer readable program code, said computer readable program code comprising an algorithm adapted to implement a method within a computing device, said method comprising: receiving, by a computer processor of said computing device from RFID tags embedded in sensors, first event data associated with a first plurality of events detected by said sensors, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory; deriving, by said computer processor executing said inference engine software application, first assumption data associated with each portion of first portions of said first event data associated with associated RFID tags of said RFID tags, wherein said first assumption data comprises multiple sets of assumptions associated with said plurality of events, wherein each set of said multiple sets comprises assumed event conditions and an associated plausibility percentage value, and wherein at least two sets of said multiple sets are associated with each event of said plurality of events; generating, by said computer processor based on results of said deriving and executing the Dempster Shafer theory with respect to a first pair of sets of said multiple sets with respect to a first event of said plurality of events, an initial recommendation for said event, said initial
recommendation associated with a first selected set of said first pair of sets, said first selected set comprises a first plausibility percentage value; retrieving, by said computer processor from said truth maintenance system database, previous assumption data derived from and associated with previous portions of previous event data retrieved from said RFID tags embedded in said sensors, said previous assumption data derived at a time differing from a time of said deriving, said previous event data associated with previous events occurring at a different time from said first plurality of events; executing, by said computer processor, said non monotonic logic with respect to said first assumption data and said previous assumption data; additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first pair of sets and said previous assumption data; generating, by said computer processor based on results of said additionally executing and modifying said first plausibility percentage value of said first selected set, an updated recommendation for said first event, said updated recommendation associated with a second selected set of said first pair of sets, said second selected set differing from said first selected set; and generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated assumption data associated with said first assumption data and said previous assumption data, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with conditions of vehicles detected by said sensors. 9. The computer program product of claim 8, wherein said method further comprises: executing, by said computer processor based on said first updated assumption data, an action associated with objects detected by said sensors. 10. 
The computer program product of claim 9, wherein said action comprises implementing a pay by usage cloud metering model associated with said objects. 11. The computer program product of claim 8, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with objects detected by said sensors. 12. The computer program product of claim 8, wherein said method further comprises: receiving, by said computer processor from said RFID tags embedded in said sensors, second event data associated with a second plurality of events detected by said sensors, said second plurality of events occurring at a time differing from said first plurality of events; associating, by said computer processor, first portions of said second event data with associated RFID tags of said RFID tags; deriving, by said computer processor executing said inference engine software application, second assumption data associated with each portion of said first portions of said second event data; retrieving, by said computer processor from said truth maintenance system database, said previous assumption data, said first assumption data, and said first updated assumption data; executing, by said computer processor, said non monotonic logic with respect to said first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; generating, by said computer processor executing said non monotonic logic and said inference engine software application, second updated assumption data associated with first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; and storing, by said computer processor in said truth maintenance system database, said second assumption data. 13. 
The computer program product of claim 8, wherein said generating first updated assumption data comprises retracting portions of said first assumption data and said previous assumption data. 14. A computing system comprising a computer processor coupled to a computer-readable memory unit, said memory unit comprising instructions that when enabled by the computer processor implements a method comprising: receiving, by said computer processor from RFID tags embedded in sensors, first event data associated with a first plurality of events detected by said sensors, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory; deriving, by said computer processor executing said inference engine software application, first assumption data associated with each portion of first portions of said first event data associated with associated RFID tags of said RFID tags, wherein said first assumption data comprises multiple sets of assumptions associated with said plurality of events, wherein each set of said multiple sets comprises assumed event conditions and an associated plausibility percentage value, and wherein at least two sets of said multiple sets are associated with each event of said plurality of events; generating, by said computer processor based on results of said deriving and executing the Dempster Shafer theory with respect to a first pair of sets of said multiple sets with respect to a first event of said plurality of events, an initial recommendation for said event, said initial recommendation associated with a first selected set of said first pair of sets, said first selected set comprises a first plausibility percentage value; retrieving, by said computer processor from said truth maintenance system database, previous assumption data derived from and associated
with previous portions of previous event data retrieved from said RFID tags embedded in said sensors, said previous assumption data derived at a time differing from a time of said deriving, said previous event data associated with previous events occurring at a different time from said first plurality of events; executing, by said computer processor, said non monotonic logic with respect to said first assumption data and said previous assumption data; additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first pair of sets and said previous assumption data; generating, by said computer processor based on results of said additionally executing and modifying said first plausibility percentage value of said first selected set, an updated recommendation for said first event, said updated recommendation associated with a second selected set of said first pair of sets, said second selected set differing from said first selected set; and generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated assumption data associated with said first assumption data and said previous assumption data, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with conditions of vehicles detected by said sensors. 15. The computing system of claim 14, wherein said method further comprises: executing, by said computer processor based on said first updated assumption data, an action associated with objects detected by said sensors. 16. The computing system of claim 15, wherein said action comprises implementing a pay by usage cloud metering model associated with said objects. 17. 
The computing system of claim 14, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with objects detected by said sensors. 18. The computing system of claim 14, wherein said method further comprises: receiving, by said computer processor from said RFID tags embedded in said sensors, second event data associated with a second plurality of events detected by said sensors, said second plurality of events occurring at a time differing from said first plurality of events; associating, by said computer processor, first portions of said second event data with associated RFID tags of said RFID tags; deriving, by said computer processor executing said inference engine software application, second assumption data associated with each portion of said first portions of said second event data; retrieving, by said computer processor from said truth maintenance system database, said previous assumption data, said first assumption data, and said first updated assumption data; executing, by said computer processor, said non monotonic logic with respect to said first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; generating, by said computer processor executing said non monotonic logic and said inference engine software application, second updated assumption data associated with first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; and storing, by said computer processor in said truth maintenance system database, said second assumption data.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A truth maintenance method and system. The method includes receiving by a computer processor from RFID tags embedded in sensors, event data associated with events detected by said sensors. The computer processor associates portions of the event data with associated RFID tags and derives assumption data associated with each portion of the portions. The computer processor retrieves previous assumption data derived from and associated with previous portions of previous event data retrieved from the RFID tags and executes non monotonic logic with respect to the assumption data and the previous assumption data. In response, the computer processor generates and stores updated assumption data associated with the assumption data and the previous assumption data.
G06N504
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A truth maintenance method and system. The method includes receiving by a computer processor from RFID tags embedded in sensors, event data associated with events detected by said sensors. The computer processor associates portions of the event data with associated RFID tags and derives assumption data associated with each portion of the portions. The computer processor retrieves previous assumption data derived from and associated with previous portions of previous event data retrieved from the RFID tags and executes non monotonic logic with respect to the assumption data and the previous assumption data. In response, the computer processor generates and stores updated assumption data associated with the assumption data and the previous assumption data.
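The claims above lean on Dempster-Shafer evidence combination to merge assumption sets derived from RFID sensor events, each carrying a plausibility value. As an illustration only — the function names, the toy frame of discernment, and the specific masses are assumptions, not anything taken from the patent — a minimal sketch of Dempster's rule of combination and the resulting plausibility might look like:

```python
from itertools import product


def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1, m2: dicts mapping a frozenset (focal element) to its mass.
    Returns the combined, conflict-normalized mass assignment.
    """
    raw = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in raw.items()}


def plausibility(m, hypothesis):
    """Pl(H): sum of masses of focal elements that intersect H."""
    h = frozenset(hypothesis)
    return sum(w for s, w in m.items() if s & h)
```

For example, combining `{("ok",): 0.6, ("ok","fault"): 0.4}` with `{("fault",): 0.5, ("ok","fault"): 0.5}` yields normalized masses 3/7, 2/7, 2/7 for {ok}, {fault}, and the full frame, from which per-hypothesis plausibility values analogous to the claims' "plausibility percentage value" can be read off.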
A method and system are described for modeling the content evolution of an accessed document and predicting an associated outcome for said document. The system accesses a document but can further receive additional tags, metadata, or related information that characterizes the nature of such text collection. The invention applies various processing to separate the document into elements and performs semantic modeling to create a narrative model that describes the evolution of the contents of the elements in terms of their respective sequencing. This system then uses a set of training documents with target values assigned to them to predict an associated outcome for the accessed document. The most relevant subset of a training set can be selected by matching metadata information that characterizes the accessed document against a collection of metadata that characterizes other broad document sets. Such characterization is done using graph partitioning or other community detection methods from metadata information that characterizes the document sets and relations between multiple sets of such documents. The outcome of the method may apply to prediction of the economic value of events described by the accessed document, success measures of the document quality, or discovery of related content with a similar associated outcome to the accessed document.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: accessing a document with attributes and tags that sequentially order elements of the document; extracting a selection of document text belonging to a specific set of attributes; creating a narrative model that represents evolution of semantics with respect to the sequentially ordered elements; accessing a set of target values and training documents, wherein the target value quantifies an outcome associated with one or more of the training documents in the set; and predicting an outcome associated with the accessed document. 2. The method of claim 1, wherein the semantics includes: statistical methods including one or a combination of sentiment analysis, semantic analysis, pragmatics analysis, latent class analysis, support vector machines, semantic orientation, pointwise mutual information and any document-type specific analysis. 3. The method of claim 1, further comprising: associating the training documents with communities; training a classifier between the communities and the target values; detecting relationships between the elements of the accessed document and the communities; calculating weighting based on the detected relationships, and wherein predicting the outcome is based on the classifiers and the calculated weightings. 4. The method of claim 3, wherein: obtaining a collection of topics over a corpus of documents using latent models based on the words in those documents, and using significant words in the significant topics representing a document as tags in associating documents with communities. 5. The method of claim 3, further comprising: training multiple predictors between the communities and the target values, and wherein predicting the outcome is further based on these predictors. 6. 
The method recited in claim 1, wherein the accessed document includes: a collection of elements arranged in its temporal succession in which elements of the documents can be accessed according to a specific set of attributes. 7. The method recited in claim 1, wherein creating the narrative model includes: creating a branching narrative that represents multiple path possibilities when applicable to the document. 8. The method of claim 1, wherein generating a prediction includes: creating narrative models for the training documents, wherein generating the prediction is further based on the narrative models for the training documents. 9. The method recited in claim 1, wherein creating the narrative model includes: generating a sequence of semantics descriptor vectors that are indexed to the sequentially ordered elements; analysing the change and association of semantics from element to element within documents with one or more additional features, tags, attributes; and representing as a collection of vectors. 10. The method recited in claim 1, wherein creating the narrative model includes: generating a contingency matrix; and using the contingency matrix in semantically analyzing the document, wherein semantically analyzing the document yields data that is inputted to the narrative model. 11. The method recited in claim 10 further comprising: training a lexicon of distributed word vectors on individual words with generative models that represent topics as frequencies of words and tracing rates of word usage with respect to the elements in the document, and wherein: generating the contingency matrix includes modifying word frequency data using the lexicon. 12. The method of claim 1, wherein predicting the outcome includes: transforming the narrative model through alignment transformation to match the number of coefficients between models with different numbers of elements. 13. 
The method of claim 1, wherein predicting the outcome further includes: training a classifier using the set of target values and training documents; and inputting the narrative model into the classifier wherein the classifier is used to generate the prediction. 14. The method of claim 1, wherein predicting the outcome includes: training an ensemble model that includes classifiers and/or regression models using the set of target values and training documents; and using the narrative model from the accessed document with the ensemble model to predict the associated outcome. 15. A system comprising a processor having instructions operable to cause the processor to: access a document with attributes and tags that sequentially order elements of the document; extract a selection of document text belonging to a specific set of attributes; create a narrative model that represents evolution of semantics with respect to the sequentially ordered elements; access a set of target values and training documents, wherein the target value quantifies an outcome associated with one or more of the training documents in the set; and generate a prediction of an outcome associated with the accessed document. 16. The system of claim 15, wherein the semantics includes: applying statistical methods including one or a combination of sentiment analysis, semantic analysis, pragmatics analysis, latent class analysis, support vector machines, semantic orientation, pointwise mutual information and any document-type specific analysis. 17. The system of claim 15, further comprises: associate the training documents with communities; train a classifier between the communities and the target values; detect relationships between the elements of the accessed document and the communities; calculate weighting based on the detected relationships, and wherein the prediction of the outcome is based on the classifiers and the calculated weightings. 18. 
The system of claim 15 being embedded in a word processing system. 19. The system of claim 15, wherein the prediction includes: metadata and factors having predictive value with respect to the outcome associated with the accessed document. 20. The system of claim 15, wherein the prediction: finds documents in a database that are closest in terms of outcome associated with the accessed document as found by the prediction method, and reports these documents as a content discovery output.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A method and system are described for modeling the content evolution of an accessed document and predicting an associated outcome for said document. The system accesses a document but can further receive additional tags, metadata, or related information that characterizes the nature of such text collection. The invention applies various processing to separate the document into elements and performs semantic modeling to create a narrative model that describes the evolution of the contents of the elements in terms of their respective sequencing. This system then uses a set of training documents with target values assigned to them to predict an associated outcome for the accessed document. The most relevant subset of a training set can be selected by matching metadata information that characterizes the accessed document against a collection of metadata that characterizes other broad document sets. Such characterization is done using graph partitioning or other community detection methods from metadata information that characterizes the document sets and relations between multiple sets of such documents. The outcome of the method may apply to prediction of the economic value of events described by the accessed document, success measures of the document quality, or discovery of related content with a similar associated outcome to the accessed document.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A method and system are described for modeling the content evolution of an accessed document and predicting an associated outcome for said document. The system accesses a document but can further receive additional tags, metadata, or related information that characterizes the nature of such text collection. The invention applies various processing to separate the document into elements and performs semantic modeling to create a narrative model that describes the evolution of the contents of the elements in terms of their respective sequencing. This system then uses a set of training documents with target values assigned to them to predict an associated outcome for the accessed document. The most relevant subset of a training set can be selected by matching metadata information that characterizes the accessed document against a collection of metadata that characterizes other broad document sets. Such characterization is done using graph partitioning or other community detection methods from metadata information that characterizes the document sets and relations between multiple sets of such documents. The outcome of the method may apply to prediction of the economic value of events described by the accessed document, success measures of the document quality, or discovery of related content with a similar associated outcome to the accessed document.
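Claims 9 and 12 of the narrative-model patent describe generating a sequence of semantics descriptors indexed to the document's ordered elements, and an alignment transformation that matches coefficient counts between models with different numbers of elements. A minimal sketch, assuming a simple lexicon-based sentiment descriptor (one scalar per element rather than a full vector) and plain linear resampling as the alignment step — both are illustrative stand-ins, not the patent's actual method:

```python
def narrative_model(elements, pos, neg):
    """One sentiment descriptor per sequentially ordered element:
    (#positive-lexicon words - #negative-lexicon words) / total words."""
    model = []
    for el in elements:
        words = el.lower().split()
        score = sum(w in pos for w in words) - sum(w in neg for w in words)
        model.append(score / len(words) if words else 0.0)
    return model


def align(model, k):
    """Linearly resample a length-n descriptor sequence to length k so
    models built from documents of different lengths become comparable."""
    n = len(model)
    if n == 1 or k == 1:
        return [model[0]] * k
    out = []
    for j in range(k):
        t = j * (n - 1) / (k - 1)          # position in the source sequence
        i = min(int(t), n - 2)
        frac = t - i
        out.append(model[i] * (1 - frac) + model[i + 1] * frac)
    return out
```

After alignment, every document yields a fixed-length coefficient vector that can be fed to a classifier or regressor trained on the target values, as in claims 13 and 14.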
Mechanisms for clarifying an input question are provided. A question is received for generation of an answer. A set of candidate answers is generated based on an analysis of a corpus of information. Each candidate answer has an evidence passage supporting the candidate answer. Based on the set of candidate answers, a determination is made as to whether clarification of the question is required. In response to a determination that clarification of the question is required, a request is sent for user input to clarify the question. User input is received from the computing device in response to the request and at least one candidate answer in the set of candidate answers is selected as an answer for the question based on the user input.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-20. (canceled) 21. A method, in a data processing system comprising a processor and a memory, for clarifying an input question, the method comprising: generating, in the data processing system, a set of candidate answers for an input question, wherein each candidate answer in the set of candidate answers corresponds to an evidence passage supporting the candidate answer as answering the input question; determining, in the data processing system, based on the set of candidate answers, whether clarification of the input question is required; and in response to a determination that clarification of the input question is required: identifying, by the data processing system, a differentiating factor in evidence passages of at least two candidate answers in the set of candidate answers; outputting, by the data processing system, a request for user input to clarify the input question, wherein the request for user input is generated based on the identified differentiating factor; and selecting, by the data processing system, at least one candidate answer in the set of candidate answers as an answer for the input question based on a user input in response to the request. 22. The method of claim 21, wherein the request for user input comprises a clarification question directed to the differentiating factor and a plurality of user selectable potential answers to the clarification question, each answer corresponding to a portion of a corresponding one of the evidence passages, of the at least two candidate answers, directed to the differentiating factor. 23. The method of claim 21, wherein the request for user input comprises a clarification question that comprises a potential answer corresponding to the differentiating factor in the content of the clarification question and user selectable potential answers in the affirmative and negative for answering the clarification question. 24.
The method of claim 21, wherein the request for user input comprises a clarification question that is directed to the differentiating factor and a free-form text entry field into which a user may input a textual answer to the clarification question. 25. The method of claim 21, wherein determining, based on the set of candidate answers, whether clarification of the input question is required comprises determining that clarification of the input question is required in response to the set of candidate answers comprising a plurality of candidate answers with corresponding confidence scores equal to or higher than a predetermined threshold confidence score. 26. The method of claim 21, wherein selecting at least one candidate answer in the set of candidate answers as an answer for the input question comprises: updating the set of candidate answers based on the user input; and selecting the at least one candidate answer from the updated set of candidate answers. 27. The method of claim 26, wherein updating the set of candidate answers comprises modifying confidence scores associated with one or more of the candidate answers in the set of candidate answers based on the user input, wherein confidence scores for candidate answers having evidence passages corresponding to the user input are increased and candidate answers having evidence passages not corresponding to the user input are decreased. 28. The method of claim 26, wherein updating the set of candidate answers comprises removing candidate answers, from the set of candidate answers, that have evidence passages that do not correspond to the user input. 29. The method of claim 26, wherein selecting the at least one candidate answer from the updated set of candidate answers comprises: performing synthesis stage, merging and ranking stage, and final answer selecting stage operations of a question and answer (QA) system pipeline on the updated set of candidate answers. 30. 
The method of claim 21, wherein the request comprises a clarifying question posed to a user, wherein the clarifying question is generated based on the differentiating factor and is constructed such that an answer to the clarifying question indicates a correctness of one of the at least two candidate answers based on their associated evidence passages. 31. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: generate a set of candidate answers for an input question, wherein each candidate answer in the set of candidate answers corresponds to an evidence passage supporting the candidate answer as answering the input question; determine, based on the set of candidate answers, whether clarification of the input question is required; and in response to a determination that clarification of the input question is required: identify a differentiating factor in evidence passages of at least two candidate answers in the set of candidate answers; output a request for user input to clarify the input question, wherein the request for user input is generated based on the identified differentiating factor; and select at least one candidate answer in the set of candidate answers as an answer for the input question based on a user response to the request. 32. The computer program product of claim 31, wherein the request for user input comprises a clarification question directed to the differentiating factor and a plurality of user selectable potential answers to the clarification question, each answer corresponding to a portion of a corresponding one of the evidence passages, of the at least two candidate answers, directed to the differentiating factor. 33.
The computer program product of claim 31, wherein the request for user input comprises a clarification question that comprises a potential answer corresponding to the differentiating factor in the content of the clarification question and user selectable potential answers in the affirmative and negative for answering the clarification question. 34. The computer program product of claim 31, wherein the request for user input comprises a clarification question that is directed to the differentiating factor and a free-form text entry field into which a user may input a textual answer to the clarification question. 35. The computer program product of claim 31, wherein determining, based on the set of candidate answers, whether clarification of the input question is required comprises determining that clarification of the input question is required in response to the set of candidate answers comprising a plurality of candidate answers with corresponding confidence scores equal to or higher than a predetermined threshold confidence score. 36. The computer program product of claim 31, wherein the computer readable program further causes the computing device to select at least one candidate answer in the set of candidate answers as an answer for the input question at least by: updating the set of candidate answers based on the user input; and selecting the at least one candidate answer from the updated set of candidate answers. 37. The computer program product of claim 36, wherein the computer readable program further causes the computing device to update the set of candidate answers at least by modifying confidence scores associated with one or more of the candidate answers in the set of candidate answers based on the user input, wherein confidence scores for candidate answers having evidence passages corresponding to the user input are increased and candidate answers having evidence passages not corresponding to the user input are decreased. 38. 
The computer program product of claim 36, wherein the computer readable program further causes the computing device to update the set of candidate answers at least by removing candidate answers, from the set of candidate answers, that have evidence passages that do not correspond to the user input. 39. The computer program product of claim 31, wherein the request comprises a clarifying question posed to a user, wherein the clarifying question is generated based on the differentiating factor and is constructed such that an answer to the clarifying question indicates a correctness of one of the at least two candidate answers based on their associated evidence passages. 40. An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: generate a set of candidate answers for an input question, wherein each candidate answer in the set of candidate answers corresponds to an evidence passage supporting the candidate answer as answering the input question; determine, based on the set of candidate answers, whether clarification of the input question is required; and in response to a determination that clarification of the input question is required: identify a differentiating factor in evidence passages of at least two candidate answers in the set of candidate answers; output a request for user input to clarify the input question, wherein the request for user input is generated based on the identified differentiating factor; and select at least one candidate answer in the set of candidate answers as an answer for the input question based on a user response to the request.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Mechanisms for clarifying an input question are provided. A question is received for generation of an answer. A set of candidate answers is generated based on an analysis of a corpus of information. Each candidate answer has an evidence passage supporting the candidate answer. Based on the set of candidate answers, a determination is made as to whether clarification of the question is required. In response to a determination that clarification of the question is required, a request is sent for user input to clarify the question. User input is received from the computing device in response to the request and at least one candidate answer in the set of candidate answers is selected as an answer for the question based on the user input.
G06N502
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Mechanisms for clarifying an input question are provided. A question is received for generation of an answer. A set of candidate answers is generated based on an analysis of a corpus of information. Each candidate answer has an evidence passage supporting the candidate answer. Based on the set of candidate answers, a determination is made as to whether clarification of the question is required. In response to a determination that clarification of the question is required, a request is sent for user input to clarify the question. User input is received from the computing device in response to the request and at least one candidate answer in the set of candidate answers is selected as an answer for the question based on the user input.
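Claims 27 and 28 of the question-clarification patent describe raising the confidence of candidates whose evidence passages correspond to the user's clarification, lowering the others, and removing non-corresponding candidates. A toy sketch under assumed scoring rules — the multiplicative boost/penalty factors, the substring matching, and the pruning threshold are illustrative choices, not the patent's method:

```python
def update_candidates(candidates, user_input, boost=1.25, penalty=0.5,
                      drop_below=0.05):
    """Re-score candidate answers after a clarifying user response.

    candidates: list of dicts with 'answer', 'evidence', 'confidence'.
    Confidence is raised when the evidence passage contains the user's
    clarification term, lowered otherwise; weak candidates are pruned.
    Returns the surviving candidates sorted best-first.
    """
    term = user_input.lower()
    updated = []
    for c in candidates:
        factor = boost if term in c["evidence"].lower() else penalty
        score = min(1.0, c["confidence"] * factor)
        if score >= drop_below:
            updated.append({**c, "confidence": score})
    return sorted(updated, key=lambda c: c["confidence"], reverse=True)
```

In a full pipeline the surviving set would then pass back through the synthesis, merging/ranking, and final-answer stages, as claim 29 notes.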
A system for providing semantic reasoning includes an extended semantic model, a semantic knowledge database, and an inference engine. The extended semantic model includes existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic including conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively. The semantic knowledge database includes existing nodes and existing links and the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships. The inference engine is configured to add new nodes and new links to the semantic knowledge database by following the logic of the extended semantic model.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for providing semantic reasoning comprising: an extended semantic model comprising existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic comprising conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively; a semantic knowledge database comprising existing nodes and existing links and wherein the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships; an inference engine configured to add new nodes and new links to the semantic knowledge database by following the logic of the extended semantic model; and a computing system comprising at least a processor configured to execute the logic of the extended semantic model. 2. The system of claim 1, further comprising a data normalizer configured to receive data and normalize them to the existing concepts and existing relationships of the extended semantic model. 3. The system of claim 1, further comprising a repertoire library and wherein the repertoire library comprises a set of functions that are identified in the extended semantic model and are invoked by the inference engine. 4. The system of claim 3, wherein the set of functions comprises one of resolver, context-gate, collector, qualifier or operators, and wherein each type of function fulfills a specific role in the inference process. 5. The system of claim 3, wherein each function is assigned a unique identification number. 6. The system of claim 1, further comprising a user interface configured to query the semantic knowledge database and to provide decision support to a user. 7.
The system of claim 1, wherein the extended semantic model further comprises upstream concepts and downstream concepts and wherein the upstream concepts are connected to the downstream concepts via relationships that are inferred from the upstream concepts. 8. The system of claim 7, wherein the extended semantic model further comprises notes and wherein each note comprises data associated with an existing concept. 9. The system of claim 8, wherein said data comprise values associated with an existing concept or a chain of links connecting data within an upstream concept with a downstream concept. 10. The system of claim 1, wherein the extended semantic model further comprises properties and wherein said properties are used by the inference engine for implementing a specific logic for instancing a link. 11. The system of claim 1, wherein said existing concepts comprise fact concepts and wherein the fact concepts are directly observed or known. 12. The system of claim 11, wherein said existing concepts comprise insight concepts, and wherein the insight concepts are inferred by the inference engine from fact concepts. 13. The system of claim 12, wherein relationships between fact concepts and insight concepts are automatically inferred by the inference engine. 14. The system of claim 13, wherein the semantic knowledge database further comprises fact nodes corresponding to the fact concepts and insight nodes corresponding to the insight concepts. 15. The system of claim 8, wherein the semantic knowledge database further comprises attributes of nodes and links and wherein said attributes represent instances of said notes. 16. The system of claim 1, wherein the inference engine adds new concepts and relationships to the semantic knowledge database by first creating instances of fact nodes and associated links, next recursively updating downstream nodes, and then instancing downstream insight nodes and associated links. 17.
The system of claim 1, further comprising an extensible meta-language and wherein the extensible meta-language comprises a vocabulary of words and a syntax, and wherein new words are configured to be added to the vocabulary by building upon existing words. 18. The system of claim 17, further comprising an interpreter of the extensible meta-language and wherein the interpreter is configured to provide an interface between the inference engine and the extended semantic model and the semantic knowledge database. 19. The system of claim 18 wherein words in the extensible meta-language are used to query nodes, links and attributes in the semantic knowledge database. 20. The system of claim 18, wherein the extensible meta-language comprises an extension of a FORTH language and wherein the interpreter comprises a FORTH language interpreter. 21. The system of claim 1, wherein the computing system comprises a distributed computing system. 22. The system of claim 1, wherein the extended semantic model is extended by adding new concepts, new fact concepts, new insight concepts, or another extended semantic model. 23. 
A method for providing semantic reasoning comprising: providing an extended semantic model comprising existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic comprising conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively; providing a semantic knowledge database comprising existing nodes, and existing links and wherein the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships; providing an inference engine configured to add new concepts and relationships to the semantic knowledge database by following the logic of the extended semantic model; and providing a computing system comprising at least a processor configured to execute the logic of the extended semantic model.
ACCEPTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: A system for providing semantic reasoning includes an extended semantic model, a semantic knowledge database, and an inference engine. The extended semantic model includes existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic including conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively. The semantic knowledge database includes existing nodes and existing links and the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships. The inference engine is configured to add new nodes and new links to the semantic knowledge database by following the logic of the extended semantic model.
G06N502
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A system for providing semantic reasoning includes an extended semantic model, a semantic knowledge database, and an inference engine. The extended semantic model includes existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic including conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively. The semantic knowledge database includes existing nodes and existing links and the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships. The inference engine is configured to add new nodes and new links to the semantic knowledge database by following the logic of the extended semantic model.
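The semantic-reasoning claims above describe an inference engine that follows model logic to add new nodes and links to a knowledge graph. A minimal forward-chaining sketch of that pattern, with all names (`Rule`, `infer`, the flu example) illustrative rather than taken from the patent:

```python
class Rule:
    def __init__(self, condition, action):
        self.condition = condition  # predicate over (links, node)
        self.action = action        # node -> (source, relation, target)

def infer(nodes, links, rules):
    """Recursively apply rules until no new link can be inferred."""
    links = set(links)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for node in nodes:
                if rule.condition(links, node):
                    new_link = rule.action(node)
                    if new_link not in links:
                        links.add(new_link)
                        changed = True
    return links

# Fact links: directly observed (claim 11); the insight link is
# inferred by the engine from the facts (claim 12).
nodes = {"patient1"}
facts = {("patient1", "has_symptom", "fever"),
         ("patient1", "has_symptom", "cough")}
flu_rule = Rule(
    condition=lambda links, n: (n, "has_symptom", "fever") in links
    and (n, "has_symptom", "cough") in links,
    action=lambda n: (n, "insight", "flu_suspected"))
knowledge = infer(nodes, facts, [flu_rule])
```

The fixed-point loop mirrors claim 16's "recursively updating downstream nodes": each pass re-checks every rule because a newly inferred link can enable further inferences.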
Methods and systems are described for setting operation rules for use in controlling aspects of a home automation system. According to at least one embodiment, an apparatus for establishing operation rules in a home automation system includes a processor, a memory in electronic communication with the processor, and instructions stored in the memory which are executable by the processor to receive a spoken command having a plurality of rule setting terms, establish at least one operation rule for the home automation system based on the spoken command, and store the at least one operation rule for later use by the home automation system.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An apparatus for establishing operation rules in a home automation system, comprising: a processor; a memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: receive a spoken command having a plurality of rule setting terms; establish at least one operation rule for the home automation system based on the spoken command; and store the at least one operation rule for later use by the home automation system. 2. The apparatus of claim 1, wherein the instructions are executable by the processor to: initiate a rules mode for the home automation system prior to receiving the spoken command. 3. The apparatus of claim 2, wherein the instructions are executable by the processor to: terminate the rules mode after establishing the at least one operation rule. 4. The apparatus of claim 2, wherein the initiating the rules mode includes receiving a touch input at a control panel of the home automation system. 5. The apparatus of claim 2, wherein the initiating the rules mode includes receiving a spoken trigger word or an audible sound. 6. The apparatus of claim 1, wherein the instructions are executable by the processor to: deliver a request for clarification of the spoken command. 7. The apparatus of claim 6, wherein the request for clarification includes an audible message. 8. The apparatus of claim 6, wherein the request for clarification includes a displayed message visible on a control panel of the home automation system. 9. The apparatus of claim 1, wherein the instructions are executable by the processor to: generate a message that includes a restatement of the spoken command and a request for confirmation of the correctness of the restatement of the spoken command. 10. 
The apparatus of claim 1, wherein the instructions are executable by the processor to: generate a message to a user of the home automation system that includes a restatement of the at least one operation rule and requests confirmation of the accuracy of the at least one operation rule. 11. A computer-program product for establishing operation rules in a home automation system, the computer-program product comprising a non-transitory computer-readable medium storing instructions executable by a processor to: initiate a rule setting mode for the home automation system; receive a spoken command having a plurality of rule setting terms; generate at least one operation rule for the home automation system based on the spoken command; and generate a message that includes a restatement of the at least one operation rule. 12. The computer-program product of claim 11, wherein at least some of the plurality of rule setting terms are defined during installation of the home automation system at a property. 13. The computer-program product of claim 11, wherein at least some of the plurality of rule setting terms are generic to a plurality of different home automation systems associated with a plurality of different properties. 14. The computer-program product of claim 11, wherein the plurality of rule setting terms include at least one static term that is generic to a plurality of different home automation systems, and at least one dynamic term that is uniquely defined for the home automation system. 15. The computer-program product of claim 11, wherein the instructions are executable by the processor to: store the at least one operation rule for later use by the home automation system, wherein storing the at least one operation rule includes storing in a local database of the home automation system. 16. The computer-program product of claim 11, wherein the instructions are executable by the processor to: generate a message requesting confirmation of the spoken command. 17. 
The computer-program product of claim 16, wherein the instructions are executable by the processor to: receive confirmation of the spoken command. 18. A computer-implemented method for establishing operation rules in a home automation system, comprising: initiating a rules mode for the home automation system; receiving a spoken command having a plurality of rule setting terms; establishing at least one operation rule for the home automation system using the spoken command; storing the at least one operation rule; and operating at least one function of the home automation system based on the at least one stored operation rule. 19. The method of claim 18, wherein the plurality of rule setting terms include at least one static term that is pre-programmed into the home automation system prior to installation. 20. The method of claim 18, wherein the plurality of rule setting terms include at least one dynamic term that is uniquely programmed into the home automation system no sooner than installation at a property.
ACCEPTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: Methods and systems are described for setting operation rules for use in controlling aspects of a home automation system. According to at least one embodiment, an apparatus for establishing operation rules in a home automation system includes a processor, a memory in electronic communication with the processor, and instructions stored in the memory which are executable by the processor to receive a spoken command having a plurality of rule setting terms, establish at least one operation rule for the home automation system based on the spoken command, and store the at least one operation rule for later use by the home automation system.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Methods and systems are described for setting operation rules for use in controlling aspects of a home automation system. According to at least one embodiment, an apparatus for establishing operation rules in a home automation system includes a processor, a memory in electronic communication with the processor, and instructions stored in the memory which are executable by the processor to receive a spoken command having a plurality of rule setting terms, establish at least one operation rule for the home automation system based on the spoken command, and store the at least one operation rule for later use by the home automation system.
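Claim 14 above distinguishes "static" rule-setting terms generic to every installation from "dynamic" terms defined per property. A hypothetical sketch of how a recognized command could be split along those lines into an operation rule; the term sets, the `" at "` trigger convention, and the dict schema are all assumptions for illustration:

```python
# Static terms: pre-programmed, generic across installations (claim 19).
STATIC_TERMS = {"lock", "unlock", "turn on", "turn off", "at"}

def parse_rule(spoken, dynamic_terms):
    """Extract (action, device, trigger) from a recognized command."""
    words = spoken.lower()
    action = next((t for t in STATIC_TERMS if t in words and t != "at"), None)
    # Dynamic terms: device names defined at installation (claim 20).
    device = next((d for d in dynamic_terms if d in words), None)
    trigger = words.split(" at ")[1].strip() if " at " in words else None
    if action is None or device is None:
        return None  # would prompt claim 6's "request for clarification"
    return {"action": action, "device": device, "trigger": trigger}

rule = parse_rule("Turn on the porch light at sunset",
                  dynamic_terms={"porch light", "garage door"})
```

A real system would restate the parsed rule back to the user for confirmation (claims 9-10) before storing it.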
The operation of an application on a first device may be guided by a user operating a second device. The application on the first device may present a character on a display of the first device and obtain an audio signal of speech of a user of the first device. Audio data may be transmitted to the second device and corresponding audio may be played from speakers of the second device. The second device may present suggestions of phrases to be spoken by the character displayed on the first device. A user of the second device may select a phrase to be spoken by the character. Phrase data may be transmitted to the first device, and the first device may generate audio of the character speaking the phrase using a text-to-speech voice associated with the character.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for guiding the operation of an application on a first device, the system comprising a first device and a second device, wherein: the application on the first device is configured to: present a character on a display of the first device, wherein the character is associated with a text-to-speech voice, obtain an audio signal from a microphone of the first device, wherein the audio signal comprises speech of a user of the first device, and transmit audio data to the second device in real time, wherein the audio data is generated from the audio signal; the second device is configured to: receive the audio data from the first device, cause audio to be played using the audio data, present a plurality of phrases as suggestions of phrases to be spoken by the character, receive an input from a user of the second device that specifies a selected phrase to be spoken by the character, and cause phrase data to be transmitted to the first device corresponding to the selected phrase; and the application on the first device is configured to: receive the phrase data corresponding to the selected phrase, and cause audio to be played from the first device corresponding to the selected phrase, wherein the audio is generated using the text-to-speech voice of the character.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: The operation of an application on a first device may be guided by a user operating a second device. The application on the first device may present a character on a display of the first device and obtain an audio signal of speech of a user of the first device. Audio data may be transmitted to the second device and corresponding audio may be played from speakers of the second device. The second device may present suggestions of phrases to be spoken by the character displayed on the first device. A user of the second device may select a phrase to be spoken by the character. Phrase data may be transmitted to the first device, and the first device may generate audio of the character speaking the phrase using a text-to-speech voice associated with the character.
G06N3006
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: The operation of an application on a first device may be guided by a user operating a second device. The application on the first device may present a character on a display of the first device and obtain an audio signal of speech of a user of the first device. Audio data may be transmitted to the second device and corresponding audio may be played from speakers of the second device. The second device may present suggestions of phrases to be spoken by the character displayed on the first device. A user of the second device may select a phrase to be spoken by the character. Phrase data may be transmitted to the first device, and the first device may generate audio of the character speaking the phrase using a text-to-speech voice associated with the character.
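The two-device claim above is essentially a message-passing protocol: phrase suggestions on the operator device, phrase data sent down, TTS rendering on the character device. A rough sketch under stated assumptions; the class names, the queue transport, and the bracketed string standing in for synthesized audio are all illustrative, not from the patent:

```python
import queue

class FirstDevice:
    """Child-facing device: renders the character with its TTS voice."""
    def __init__(self, character_voice):
        self.voice = character_voice
        self.inbox = queue.Queue()   # phrase data from the second device

    def speak_pending(self):
        phrase = self.inbox.get_nowait()
        return f"[{self.voice}] {phrase}"  # stand-in for real TTS audio

class SecondDevice:
    """Operator device: offers suggestions, forwards the selected one."""
    SUGGESTIONS = ["Hello there!", "What's your favorite color?"]

    def select(self, index, first_device):
        first_device.inbox.put(self.SUGGESTIONS[index])

child = FirstDevice(character_voice="robot-voice")
operator = SecondDevice()
operator.select(0, child)      # operator picks a suggested phrase
audio = child.speak_pending()  # character "speaks" it in its own voice
```

The claim's real-time audio uplink (first device microphone to second device speakers) is omitted here; only the phrase-selection path is sketched.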
Systems and methods achieving scalable and efficient connectivity in neural algorithms by re-calculating network connectivity in an event-driven way are disclosed. The disclosed solution eliminates the storing of a massive amount of data relating to connectivity used in traditional methods. In one embodiment, a deterministic LFSR is used to quickly, efficiently, and cheaply re-calculate these connections on the fly. An alternative embodiment caches some or all of the LFSR seed values in memory to avoid sequencing the LFSR through all states needed to compute targets for a particular active neuron. Additionally, connections may be calculated in a way that generates neural networks with connections that are uniformly or normally (Gaussian) distributed.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A scalable system for recalculating, in an event-driven manner, property parameters including connectivity parameters of a neural network, the system comprises: an input component that receives a time varying input signal; a storage component for storing the property parameters of the neural network; a state machine capable of recalculating property parameters of the neural network, wherein the property parameters include connectivity among neurons of the neural network; and an output component that generates output signals reflective of the calculated property parameters of the neural network and the input signal. 2. The system of claim 1, wherein the state machine is capable of generating a unique identifying number for each neuron in the neural network. 3. The system of claim 1, wherein the state machine comprises a Linear Feedback Shift Register (LFSR). 4. The system of claim 3, wherein the LFSR is configured to generate certain property parameters including connectivity. 5. The system of claim 2, wherein the state machine comprises a neuron identification counter (Neuron_Index) with a first predefined initial value and a neuron connectivity counter (Conn_Counter) with a second predefined initial value. 6. 
The system of claim 5, wherein the state machine is initialized with a third predefined initial value and configured to perform the following: comparing Conn_Counter value to a predefined final value (MAX_Conn); if Conn_Counter value is not equal to MAX_Conn, causing the state machine to update, changing the Conn_Counter to a next value, and updating property parameters of the neuron identified by Neuron_Index in response to the input signal; if Conn_Counter value is equal to MAX_Conn, changing Neuron_Index to a next value; comparing Neuron_Index to a predefined total number of neurons in the neural network (MAX_Neuron); and if Neuron_Index value is not equal to MAX_Neuron, resetting the Conn_Counter to the second predefined initial value and repeating the above steps for next neuron as identified by the Neuron_Index. 7. The system of claim 1, wherein the input component converts the received time varying input signal into a sequence of spikes. 8. The system of claim 1, wherein the state machine recalculates connectivity of a neuron currently being evaluated using a predefined initial value corresponding to the neuron currently being evaluated only when the neuron currently being evaluated fires in response to the input signal. 9. 
The system of claim 5, wherein the storage component is configured to store predefined initial values corresponding to the neurons in the neural network for the state machine and the state machine is configured to perform the following: determining whether a neuron identified by the Neuron_Index fires in response to the input signal; if the neuron identified by the Neuron_Index fires, retrieving from the storage component a predefined initial value corresponding to the neuron identified by the Neuron_Index, initializing Conn_Counter to the second predefined initial value; comparing Conn_Counter value to a predefined final value (MAX_Conn); if Conn_Counter value is not equal to MAX_Conn, causing the state machine to update to a next state, changing Conn_Counter to a next value, and repeating this step; if Conn_Counter value is equal to MAX_Conn, changing Neuron_Index to a next value; comparing Neuron_Index to a predefined maximum number of neurons in the neural network (MAX_Neuron); and if Neuron_Index value is not equal to MAX_Neuron, repeating the above steps for a next neuron as identified by Neuron_Index. 10. The system of claim 1, wherein the storage component includes a cache for storing predefined initial values used by the state machine for recalculating certain property parameters. 11. The system of claim 10, wherein the state machine is configured to further perform the following: calculating an initial value necessary for recalculating certain connectivity parameters upon determining that the initial value necessary for recalculating certain connectivity parameters are not stored in the cache; and updating the cache to include the calculated initial value according to a predetermined cache rule. 12. 
The system of claim 8, wherein the state machine is configured to further perform the following: maintaining a list of future firing neurons, wherein the recalculating at each time step is conducted only on neurons identified on the list; for each target neuron of a neuron that fires at a current time step, comparing current membrane potential of that target neuron to a corresponding predefined firing threshold of that target neuron; adding identity of a target neuron to the list of future firing neurons if the current membrane potential of that target neuron exceeds the corresponding predefined firing threshold; and removing an identity of a target neuron from the list of future firing neurons if current membrane potential of that target neuron is below the corresponding predefined firing threshold. 13. The system of claim 1, wherein the state machine comprises: a state skip unit with a number (N) of feedback networks, wherein each feedback network generates, in a number (M) of clock cycles, a state of the state machine after sequentially advancing a programmable number (P) of states from the state machine's current state, and M<P; and a multiplexing circuit for updating the state machine by selecting one of the N feedback networks. 14. The system of claim 2, wherein the identifying number of a target neuron in the network is the sum of a predefined offset value and multiple random numbers generated by the state machine so as to center the normalized distribution of the target neurons. 15. The system of claim 1, wherein the state machine is also used to generate a connection type for each neuron of the network. 16. The system of claim 1, wherein the property parameters further include neural delays of each neuron in the network. 17. The system of claim 1, further comprises: a plurality of processing elements, wherein each processing element has a state machine and is capable of calculating property parameters of a subset of neurons of the neural network. 18. 
The system of claim 2, wherein the state machine is configured to cache predefined initial values of the state machine corresponding to N neurons last fired. 19. A computer-implemented method for recalculating network property parameters of a neural network including connectivity parameters in an event-driven manner, the method comprises: initializing property parameters of the neural network; receiving, at an evaluating neuron of the neural network, a neural input corresponding to a time varying input signal to the neural network; recalculating by a state machine of the neural network at least some of the property parameters of the evaluating neuron, wherein the property parameters are random but determined after initialization; determining whether the evaluating neuron is to generate a neural output to its target neurons in the neural network; and if the evaluating neuron is determined to generate a neural output to its target neurons in the neural network, propagating the output of the evaluating neuron to its target neurons. 20. The method of claim 19, wherein the property parameters of the neural network include one or more parameters of the group consisting of maximum number of neurons in the neural network, one or more random number generation seed values, neural axonal and dendritic delay values, positive connectivity strength values, negative connectivity strength values, neuron refractory period, decay rate of neural membrane potential, neuron membrane potential, and neuron membrane leakage parameter. 21. The method of claim 19, wherein the determining comprises: calculating current membrane potential of the evaluating neuron based on the neural input and the connectivity parameters of the evaluating neuron; comparing the calculated membrane potential to a firing threshold value of the evaluating neuron; and reporting that the evaluating neuron is to generate an output if the calculated membrane potential exceeds the firing threshold value. 22. 
The method of claim 19, wherein the recalculating comprises: using a pseudo-random number generator with a pre-defined start value to calculate the property parameters. 23. The method of claim 22, wherein the pseudo-random number generator comprises a Linear Feedback Shift Register (LFSR). 24. The method of claim 19, wherein the recalculating comprises: retrieving a stored pre-defined initial value corresponding to the evaluating neuron; and calculating connectivity parameters of the evaluating neuron using the retrieved seed value. 25. The method of claim 19, wherein the recalculating comprises calculating connectivity of a neuron currently being evaluated only when the neuron currently being evaluated fires in response to the input signal. 26. The method of claim 19, further comprises retrieving an initial value of the state machine for recalculating the connectivity parameters of the evaluating neuron from a cache coupled to the state machine. 27. The method of claim 26, wherein the retrieving further comprises: calculating the initial value of the state machine for recalculating the connectivity parameters of the evaluating neuron upon determining that the cache does not store the initial value; and updating the cache to include the calculated initial value according to a predetermined cache rule. 28. 
The method of claim 19, further comprises: maintaining a list of future firing neurons, wherein the updating at each time step is conducted only on neurons identified on the list; for each target neuron of a neuron that fires at a current time step, comparing the current membrane potential of that target neuron to a corresponding predefined firing threshold of that target neuron; adding an identity of a target neuron to the list of future firing neurons if the current membrane potential of that target neuron exceeds the corresponding predefined firing threshold; and removing an identity of a target neuron from the list of future firing neurons if the current membrane potential of that target neuron is below the corresponding predefined firing threshold. 29. The method of claim 19, wherein the recalculating further comprises: sequentially advancing a programmable number (P) of states beyond the current state of the state machine in a number (M) of clock cycles, wherein M<P. 30. The method of claim 19, further comprises: taking sum of the state machine results to form a uniform distribution; and adding an offset to center the normalized distribution in calculating addresses of the target neurons. 31. The method of claim 19, wherein the recalculating further comprises using the state machine to generate a distinct connection type for each neuron of the network. 32. The method of claim 19, wherein the property parameters include neural delays of the neurons. 33. The method of claim 19, wherein the recalculating is carried out by a plurality of processing elements, wherein each processing element has a state machine and is capable of calculating property parameters of a subset of neurons of the neural network.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Systems and methods achieving scalable and efficient connectivity in neural algorithms by re-calculating network connectivity in an event-driven way are disclosed. The disclosed solution eliminates the storing of a massive amount of data relating to connectivity used in traditional methods. In one embodiment, a deterministic LFSR is used to quickly, efficiently, and cheaply re-calculate these connections on the fly. An alternative embodiment caches some or all of the LFSR seed values in memory to avoid sequencing the LFSR through all states needed to compute targets for a particular active neuron. Additionally, connections may be calculated in a way that generates neural networks with connections that are uniformly or normally (Gaussian) distributed.
G06N304
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Systems and methods achieving scalable and efficient connectivity in neural algorithms by re-calculating network connectivity in an event-driven way are disclosed. The disclosed solution eliminates the storing of a massive amount of data relating to connectivity used in traditional methods. In one embodiment, a deterministic LFSR is used to quickly, efficiently, and cheaply re-calculate these connections on the fly. An alternative embodiment caches some or all of the LFSR seed values in memory to avoid sequencing the LFSR through all states needed to compute targets for a particular active neuron. Additionally, connections may be calculated in a way that generates neural networks with connections that are uniformly or normally (Gaussian) distributed.
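The core trick in the LFSR patent above is that a deterministic generator, re-seeded per neuron, lets the system recompute a firing neuron's targets on demand instead of storing a synapse table. A minimal sketch; the register width, the taps (x^16 + x^14 + x^13 + x^11 + 1, a known maximal-length polynomial), and the seeding scheme are illustrative choices, not the patent's:

```python
def lfsr_step(state, taps=(15, 13, 12, 10), width=16):
    """Advance a Fibonacci LFSR by one state; taps are bit positions."""
    bit = 0
    for t in taps:
        bit ^= (state >> t) & 1
    return ((state << 1) | bit) & ((1 << width) - 1)

def targets_for(neuron_id, n_targets, n_neurons):
    """Recompute a neuron's target list on the fly (event-driven)."""
    state = neuron_id + 1  # per-neuron seed; all-zero state would lock up
    out = []
    for _ in range(n_targets):
        state = lfsr_step(state)
        out.append(state % n_neurons)
    return out

# Deterministic: recomputing yields the same fan-out every time,
# so no connectivity table needs to be stored.
a = targets_for(neuron_id=7, n_targets=4, n_neurons=1000)
b = targets_for(neuron_id=7, n_targets=4, n_neurons=1000)
```

The abstract's normally distributed variant (claims 14 and 30) would sum several such draws and add an offset so targets cluster around a center; the cached-seed embodiment simply stores `neuron_id + 1`-style start states for recently fired neurons instead of replaying the sequence.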
An online system, such as a social networking system, generates shared models for one or more clusters of categories. A shared model for a cluster is common to the categories assigned to the cluster. In this manner, the shared models are specific to the group of categories (e.g., selected content providers) in each cluster while requiring a reasonable computational complexity for the online system. The categories are clustered based on the performance of a model specific to a category on data for other categories.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: selecting a set of seed content providers from a set of content providers; for each seed content provider, training a model that predicts a likelihood that a user will perform an interaction with a content item provided by the seed content provider; clustering the seeds into a smaller number of clusters, where the seeds are clustered based on a performance of each model for a corresponding seed on data of the other seeds; and for each of the clusters, training a shared model for all of the seeds of the cluster. 2. The method of claim 1, further comprising: receiving a request for predicting user responses to a content item associated with a content provider; and querying a database of the shared models to identify a shared model for the content provider. 3. The method of claim 1, further comprising: assigning a content provider that is not a seed content provider to a cluster. 4. The method of claim 3, further comprising re-training the shared model for the seeds and the content provider assigned to the cluster. 5. The method of claim 1, wherein the clusters are determined based on a distance metric, where the distance metric between a first seed and a second seed indicates similarity between performance of the model for the first seed on data of the other seeds and performance of the model for the second seed on data of the other seeds. 6. The method of claim 1, wherein the clusters are determined based on a distance metric, where the distance metric between a first seed and a second seed indicates similarity between performance of the models on data of the first seed and performance of the models on data of the second seed. 7. The method of claim 1, wherein the number of the clusters is determined to minimize a loss indicating predictive error of the shared models. 8. 
The method of claim 7, wherein the loss further indicates computational complexity of the shared models. 9. The method of claim 1, wherein for each of the clusters, the shared model for the cluster is trained based on aggregated data of the seeds of the cluster. 10. The method of claim 1, further comprising training a general model for all seeds. 11. A method comprising: receiving a request for predicting user responses to a content item associated with a content provider; querying a database of a plurality of shared models to identify a shared model for the content provider, where the plurality of shared models are generated by: selecting a set of seed content providers from a set of content providers, for each seed content provider, training a model that predicts a likelihood that a user will perform an interaction with a content item provided by the seed content provider, clustering the seeds into a smaller number of clusters, where the seeds are clustered based on performance of each model for a corresponding seed on data of the other seeds, and for each of the clusters, training a shared model for all of the seeds of the cluster to generate the plurality of shared models; and predicting the user responses for the content item associated with the content provider by using the identified shared model.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An online system, such as a social networking system, generates shared models for one or more clusters of categories. A shared model for a cluster is common to the categories assigned to the cluster. In this manner, the shared models are specific to the group of categories (e.g., selected content providers) in each cluster while requiring a reasonable computational complexity for the online system. The categories are clustered based on the performance of a model specific to a category on data for other categories.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An online system, such as a social networking system, generates shared models for one or more clusters of categories. A shared model for a cluster is common to the categories assigned to the cluster. In this manner, the shared models are specific to the group of categories (e.g., selected content providers) in each cluster while requiring a reasonable computational complexity for the online system. The categories are clustered based on the performance of a model specific to a category on data for other categories.
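The clustering step described in the claims of the patent above (train one model per seed content provider, cross-evaluate each model on the other seeds' data, then group seeds whose models perform similarly) can be sketched as follows. This is a minimal illustration only: the toy per-seed "model" (a constant mean interaction-rate predictor), the greedy clustering pass, and all names and thresholds are assumptions, not details from the patent.

```python
def train_seed_model(data):
    """Toy per-seed model: predicts the seed's average interaction rate."""
    return sum(data) / len(data)

def model_error(prediction, data):
    """Mean squared error of a constant prediction on a seed's data."""
    return sum((x - prediction) ** 2 for x in data) / len(data)

def performance_matrix(seed_data):
    """perf[i][j] = error of seed i's model on seed j's data (claim 1)."""
    models = [train_seed_model(d) for d in seed_data]
    return [[model_error(m, d) for d in seed_data] for m in models]

def distance(row_a, row_b):
    """Distance metric in the spirit of claim 5: similarity of two models'
    performance profiles across the seeds' data."""
    return sum((a - b) ** 2 for a, b in zip(row_a, row_b)) ** 0.5

def cluster_seeds(seed_data, threshold):
    """Greedy single-pass clustering of seeds by the distance metric above."""
    perf = performance_matrix(seed_data)
    clusters = []  # each cluster is a list of seed indices
    for i, row in enumerate(perf):
        for cluster in clusters:
            if distance(row, perf[cluster[0]]) <= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Two seeds with similar interaction behaviour, plus one outlier seed.
data = [[0.1, 0.2, 0.15], [0.12, 0.18, 0.16], [0.9, 0.95, 0.85]]
clusters = cluster_seeds(data, threshold=0.5)
```

A shared model (claim 1's final step) would then be trained once per cluster on the aggregated data of its member seeds.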
A control method is disclosed for determining a quality indicator of medical technology recording results data from a tomography scan of an examination structure, which scan is supported by a contrast agent, by way of a tomography system. According to an embodiment of the invention, at least one control parameter value is automatically derived from the recording results data in respect of a contrast agent image region during and/or directly after the tomography scan, which value represents a quality of the recording results data in the contrast agent image region. A control system for such a determination is also disclosed.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A control method for determining a quality indicator of medical technology recording results data from a contrast-agent-assisted tomography scan of an examination structure via a tomography system, the method comprising: automatically deriving, at least one of during and directly after the tomography scan, at least one control parameter value from the recording results data in respect of a contrast agent image region, the at least one control parameter value representing a quality of the recording results data in the contrast agent image region. 2. The control method of claim 1, wherein the at least one control parameter value is derived by at least one of the tomography system and a contrast agent administration system. 3. The control method of claim 1, wherein the at least one control parameter value is derived on the basis of thresholds. 4. The control method of claim 3, wherein a threshold value, on which the derivation is based, comprises at least one of a minimum radiation value of a contrast agent and a minimum absorption value in the region of a significant structure of an examination object. 5. The control method of claim 1, wherein the at least one control parameter value represents a result of an object identification of a significant structure of an examination object. 6. The control method of claim 5, wherein the object identification comprises a segmentation of the significant structure of surrounding structures of the examination object. 7. The control method of claim 1, wherein on the basis of the quality indicator a signal is emitted to a user if the quality of the recording results data is at least one of sufficient, unsatisfactory and questionable. 8. 
A method for control adjustment of a contrast-agent-assisted tomography scan sequence of a medical technology tomography system, the method comprising: adjusting a number of control values for the tomography scan sequence as a function of a quality indicator at least one of determined in the control method of claim 1, a control parameter value derived in the context thereof, and examination data used to derive the control parameter values. 9. The method of claim 8, wherein the number of control values for the tomography scan sequence is adjusted such that a parameter value, to be expected according to at least one of a simulation and preliminary estimation, is altered in a follow-up scan scenario, essentially designed similarly to a scan scenario, as could be established in the context of the control method of claim 1, such that it represents an improved quality of recording results data. 10. The method of claim 9, wherein the number of adjusted control values comprises at least one contrast agent administration control parameter value, used for control of automatic contrast agent administration in the context of the tomography scan. 11. The method of claim 10, wherein an injection protocol for the automatic contrast agent administration is modified by adjusting the contrast agent administration control parameter value in the injection protocol. 12. 
A control system for determining a quality indicator of medical technology recording results data from a contrast-agent-assisted tomography scan of an examination structure using a tomography system, the control system comprising: an input interface for the recording results data; and a derivation unit to, at least one of during and directly after the tomography scan, automatically derive at least one control parameter value from the recording results data in respect of a contrast agent image region, the at least one control parameter value representing a quality of the recording results data in the contrast agent image region. 13. A tomography system, comprising: a recording unit; and the control system of claim 12. 14. A contrast agent administration system, comprising: a contrast agent administration control; and the control system of claim 12. 15. A computer program product, loadable directly into a processor of a programmable control system, including program code segments to execute the control method of claim 1 when the program product is executed on the control system. 16. The control method of claim 7, wherein on the basis of the quality indicator a signal is emitted to a user if the quality of the recording results data is at least one of sufficient, unsatisfactory and questionable, related to a previously defined purpose of the tomography scan. 17. A computer program product, loadable directly into a processor of a programmable control system, including program code segments to execute the control method of claim 8 when the program product is executed on the control system. 18. A computer readable medium including program code segments for, when executed on a control system, causing the control system to implement the method of claim 1. 19. A computer readable medium including program code segments for, when executed on a control device of a radar system, causing the control device of the radar system to implement the method of claim 8.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A control method is disclosed for determining a quality indicator of medical technology recording results data from a tomography scan of an examination structure, which scan is supported by a contrast agent, by way of a tomography system. According to an embodiment of the invention, at least one control parameter value is automatically derived from the recording results data in respect of a contrast agent image region during and/or directly after the tomography scan, which value represents a quality of the recording results data in the contrast agent image region. A control system for such a determination is also disclosed.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A control method is disclosed for determining a quality indicator of medical technology recording results data from a tomography scan of an examination structure, which scan is supported by a contrast agent, by way of a tomography system. According to an embodiment of the invention, at least one control parameter value is automatically derived from the recording results data in respect of a contrast agent image region during and/or directly after the tomography scan, which value represents a quality of the recording results data in the contrast agent image region. A control system for such a determination is also disclosed.
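The threshold-based quality check of the tomography patent above (claims 3, 4 and 7: derive a control parameter value from the contrast agent image region and compare it against a minimum absorption value) can be sketched behaviourally. The mean-intensity derivation, the threshold values, and the three-way banding into "sufficient", "questionable" and "unsatisfactory" are illustrative assumptions, not values from the patent.

```python
def derive_control_parameter(region_pixels):
    """Control parameter value: mean intensity inside the contrast agent
    image region (claim 1)."""
    return sum(region_pixels) / len(region_pixels)

def quality_indicator(region_pixels, min_absorption=100.0, margin=20.0):
    """Maps the derived value onto the signal states of claim 7 using a
    minimum absorption threshold (claim 4) plus an assumed safety margin."""
    value = derive_control_parameter(region_pixels)
    if value >= min_absorption + margin:
        return "sufficient"
    if value >= min_absorption:
        return "questionable"
    return "unsatisfactory"

# Two example regions: one well-enhanced, one under-enhanced.
ok = quality_indicator([130, 125, 140])
bad = quality_indicator([80, 90, 85])
```

Per claim 8, a follow-up scan's control values (e.g. the injection protocol) could then be adjusted whenever the indicator is not "sufficient".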
To provide a data processing device using a neural network that can suppress increase in the occupied area of a chip. A product-sum operation circuit is formed using a transistor including an oxide semiconductor having an extremely small off-state current. Signals are input to and output from the product-sum operation circuits included in a plurality of hidden layers through comparators. The outputs of the comparators are used as digital signals to be input signals for the next-stage hidden layer. The combination of a digital circuit and an analog circuit can eliminate the need for an analog-to-digital converter or a digital-to-analog converter which occupies a large area of a chip.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A data processing device using a neural network comprising: an input layer; a hidden layer; and an output layer, wherein the hidden layer comprises: a digital-to-analog converter; a first neuron circuit; a second neuron circuit; and a comparator, wherein each of the first neuron circuit and the second neuron circuit comprises a first potential holding circuit and a second potential holding circuit, wherein the first potential holding circuit and the second potential holding circuit are electrically connected to a bit line, wherein the first potential holding circuit is configured to hold a potential of a first analog signal, wherein the second potential holding circuit is configured to hold a potential of a second analog signal, wherein the first potential holding circuit comprises a first transistor, a second transistor, and a third transistor, wherein a gate of the second transistor is electrically connected to one of a source and a drain of the first transistor, wherein a gate of the third transistor is electrically connected to a first wiring to which a first digital signal is supplied, wherein the second potential holding circuit comprises a fourth transistor, a fifth transistor, and a sixth transistor, wherein a gate of the fifth transistor is electrically connected to one of a source and a drain of the fourth transistor, wherein a gate of the sixth transistor is electrically connected to a second wiring to which a second digital signal is supplied, wherein a third analog signal is output from the first neuron circuit to the second neuron circuit, is input to the comparator to which a reference voltage is applied, and is converted into a third digital signal, and wherein the third digital signal is output to the gate of the third transistor included in the second neuron circuit or the gate of the sixth transistor included in the second neuron circuit. 2. 
The data processing device according to claim 1, wherein the third analog signal is a signal obtained by adding a product of the first analog signal and the first digital signal to a product of the second analog signal and the second digital signal. 3. The data processing device according to claim 1, wherein each of the first transistor and the fourth transistor comprises an oxide semiconductor. 4. The data processing device according to claim 1, wherein each of the second transistor, the third transistor, the fifth transistor, and the sixth transistor comprises silicon. 5. The data processing device according to claim 1, further comprising a third neuron circuit in the hidden layer. 6. An electronic component comprising: the data processing device according to claim 1, and a lead electrically connected to the data processing device. 7. An electronic device comprising: the electronic component according to claim 6, a printed circuit board where the electronic component is mounted, and a housing incorporating the printed circuit board. 8. 
A data processing device using a neural network comprising: an input layer; a hidden layer; and an output layer, wherein the hidden layer comprises: a digital-to-analog converter; a first neuron circuit; and a second neuron circuit, wherein each of the first neuron circuit and the second neuron circuit comprises a first potential holding circuit and a second potential holding circuit, wherein the first potential holding circuit and the second potential holding circuit are electrically connected to a bit line, wherein the first potential holding circuit is configured to hold a potential of a first analog signal, wherein the second potential holding circuit is configured to hold a potential of a second analog signal, wherein the first potential holding circuit comprises a first transistor, a second transistor, and a third transistor, wherein a gate of the second transistor is electrically connected to one of a source and a drain of the first transistor, wherein a gate of the third transistor is electrically connected to a first wiring to which a first digital signal is supplied, wherein the second potential holding circuit comprises a fourth transistor, a fifth transistor, and a sixth transistor, wherein a gate of the fifth transistor is electrically connected to one of a source and a drain of the fourth transistor, wherein a gate of the sixth transistor is electrically connected to a second wiring to which a second digital signal is supplied, and wherein a third analog signal is output from the first neuron circuit to the second neuron circuit. 9. The data processing device according to claim 8, wherein the third analog signal is a signal obtained by adding a product of the first analog signal and the first digital signal to a product of the second analog signal and the second digital signal. 10. The data processing device according to claim 8, wherein each of the first transistor and the fourth transistor comprises an oxide semiconductor. 11. 
The data processing device according to claim 8, wherein each of the second transistor, the third transistor, the fifth transistor, and the sixth transistor comprises silicon. 12. An electronic component comprising: the data processing device according to claim 8, and a lead electrically connected to the data processing device. 13. An electronic device comprising: the electronic component according to claim 12, a printed circuit board where the electronic component is mounted, and a housing incorporating the printed circuit board.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: To provide a data processing device using a neural network that can suppress increase in the occupied area of a chip. A product-sum operation circuit is formed using a transistor including an oxide semiconductor having an extremely small off-state current. Signals are input to and output from the product-sum operation circuits included in a plurality of hidden layers through comparators. The outputs of the comparators are used as digital signals to be input signals for the next-stage hidden layer. The combination of a digital circuit and an analog circuit can eliminate the need for an analog-to-digital converter or a digital-to-analog converter which occupies a large area of a chip.
G06N30635
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: To provide a data processing device using a neural network that can suppress increase in the occupied area of a chip. A product-sum operation circuit is formed using a transistor including an oxide semiconductor having an extremely small off-state current. Signals are input to and output from the product-sum operation circuits included in a plurality of hidden layers through comparators. The outputs of the comparators are used as digital signals to be input signals for the next-stage hidden layer. The combination of a digital circuit and an analog circuit can eliminate the need for an analog-to-digital converter or a digital-to-analog converter which occupies a large area of a chip.
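The mixed-signal hidden layer described in the abstract above (an analog product-sum circuit per neuron, followed by a comparator against a reference voltage whose 1-bit output is the digital input to the next layer) can be modelled functionally. This sketch captures the signal flow only, not the transistor-level circuit; all weights and the reference voltage are illustrative assumptions.

```python
def product_sum(analog_weights, digital_inputs):
    """Analog multiply-accumulate: held analog potentials gated by the
    digital input signals."""
    return sum(w * x for w, x in zip(analog_weights, digital_inputs))

def comparator(analog_value, reference_voltage):
    """1-bit comparator standing in for an ADC: yields a digital signal."""
    return 1 if analog_value > reference_voltage else 0

def hidden_layer(weight_rows, digital_inputs, reference_voltage):
    """One hidden layer: each neuron's analog product-sum output is
    digitized by the comparator before feeding the next layer."""
    return [comparator(product_sum(w, digital_inputs), reference_voltage)
            for w in weight_rows]

# Two neurons, two digital inputs; only the first neuron exceeds Vref.
out = hidden_layer([[0.6, 0.9], [0.2, 0.1]], [1, 1], reference_voltage=1.0)
```

Because the comparator output is already digital, chaining `hidden_layer` calls needs no analog-to-digital converter between layers, which is the chip-area saving the abstract claims.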
A machine learning device, which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, includes a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A machine learning device which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, the device comprising: a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit. 2. The machine learning device according to claim 1, further comprising a decision unit which decides and issues, as a command, a sharing detail of the task for the plurality of industrial machines by referring to the task sharing learned by the learning unit. 3. The machine learning device according to claim 2, wherein the machine learning device is connected to each of the plurality of industrial machines via a network, the state variable observation unit obtains the state variables of the plurality of industrial machines via the network, and the decision unit sends the sharing detail of the task to the plurality of industrial machines via the network. 4. The machine learning device according to claim 1, wherein the state variable observation unit observes at least one of a task time from start to end of a series of tasks repeatedly performed by the plurality of industrial machines, and a task load on each of the plurality of industrial machines in an interval from the start to the end of the tasks, or observes at least one of an achievement level of the tasks performed by the plurality of industrial machines and a difference in task volume in each of the plurality of industrial machines. 5. 
The machine learning device according to claim 4, wherein the state variable observation unit further obtains at least one of a change in production volume in an upstream process, and a change in production volume upon stop of the industrial machine for maintenance performed periodically. 6. The machine learning device according to claim 1, wherein the learning unit learns task sharing for maintaining a volume of production by the plurality of industrial machines, averaging a load on each of the plurality of industrial machines, and maximizing a volume of the task performed by the plurality of industrial machines. 7. The machine learning device according to claim 1, wherein each of the plurality of industrial machines comprises a robot, and the plurality of robots perform the task on the basis of the learned task sharing. 8. The machine learning device according to claim 1, wherein the learning unit comprises: a reward computation unit which computes a reward on the basis of output from the state variable observation unit; and a value function update unit which updates a value function for determining a value of task sharing for the plurality of industrial machines, in accordance with the reward on the basis of output from the state variable observation unit and output from the reward computation unit. 9. The machine learning device according to claim 1, wherein the learning unit comprises: an error computation unit which computes an error on the basis of input teacher data and output from the state variable observation unit; and a learning model update unit which updates a learning model for determining an error of task sharing for the plurality of industrial machines, on the basis of output from the state variable observation unit and output from the error computation unit. 10. The machine learning device according to claim 1, wherein the machine learning device further comprises a neural network. 11. 
An industrial machine cell comprising the plurality of industrial machines; and the machine learning device according to claim 1. 12. A manufacturing system comprising a plurality of industrial machine cells according to claim 11, wherein the machine learning devices are provided in correspondence with the industrial machine cells, and the machine learning devices provided in correspondence with the industrial machine cells are configured to share or exchange data with each other via a communication medium. 13. The manufacturing system according to claim 12, wherein the machine learning device is located on a cloud server. 14. A machine learning method for performing a task using a plurality of industrial machines and learning task sharing for the plurality of industrial machines, the method comprising: observing state variables of the plurality of industrial machines; and learning task sharing for the plurality of industrial machines, on the basis of the observed state variables. 15. The machine learning method according to claim 14, wherein observing the state variables comprises one of: observing at least one of a task time from start to end of a series of tasks repeatedly performed by the plurality of industrial machines, and a task load on each of the plurality of industrial machines in an interval from the start to the end of the tasks, and observing at least one of an achievement level of the tasks performed by the plurality of industrial machines and a difference in task volume in each of the plurality of industrial machines.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A machine learning device, which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, includes a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A machine learning device, which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, includes a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit.
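The reward-based learning unit of claim 8 in the task-sharing patent above (a reward computation unit driven by observed state variables, plus a value function update unit) can be sketched for a single task-sharing choice. The reward shaping, the target task time, and the learning rate are illustrative assumptions.

```python
def compute_reward(task_time, target_time):
    """Reward computation unit: a task cycle faster than the target is
    rewarded, a slower one is penalized."""
    return 1.0 if task_time <= target_time else -1.0

def update_value(value, reward, learning_rate=0.5):
    """Value function update unit: simple incremental update toward the
    observed reward."""
    return value + learning_rate * (reward - value)

# Observed state variable (task time per cycle, claim 4) over three cycles
# of one candidate task-sharing assignment.
value = 0.0
for task_time in [12.0, 9.0, 8.0]:
    reward = compute_reward(task_time, target_time=10.0)
    value = update_value(value, reward)
```

A decision unit (claim 2) would compare such values across candidate sharing assignments and issue the highest-valued one as the command to the industrial machines.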
A system for generating fabricated pattern data records (XDRs) based on data from accessible data sources, which comprises an XDR core module containing one or more modeling and pattern creation modules for modeling original data received from the data sources; one or more synthetic data generation modules for generating fabricated data, based on the patterns created by the modeling and pattern creation modules; a data splitting module for splitting the data into training and testing sets according to a predetermined policy; an XDR storage database for storing created patterns and fabricated data; a configuration manager for controlling the operation of the modeling and pattern creation modules and of the synthetic data generation modules; a plurality of XDR agents being software components for communicating with the data sources and accessing relevant data, using a unique API of each data source. Each of the XDR agents is capable of identifying the data-structures of its corresponding data source; transforming the data structures into a unified input structure being used by the XDR core module; a data-store communication module for mediating between the XDR agents and the XDR core modules by using data transformation.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for generating fabricated pattern data records (XDRs) based on data from accessible data sources, comprising: a) an XDR core module containing: one or more modeling and pattern creation modules for modeling original data received from said data sources; one or more synthetic data generation modules for generating fabricated data, based on the patterns created by said modeling and pattern creation modules; a data splitting module for splitting the data into training and testing sets according to a predetermined policy; an XDR storage database for storing created patterns and fabricated data; a configuration manager for controlling the operation of said modeling and pattern creation modules and of said synthetic data generation modules; b) a plurality of XDR agents being software components for communicating with said data sources and accessing relevant data, using a unique API of each data source, each of said XDR agents is capable of: identifying the data-structures of its corresponding data source; transforming said data structures into a unified input structure being used by said XDR core module; c) a data-store communication module for mediating between said XDR agents and said XDR core modules by using data transformation. 2. A system according to claim 1, in which the modeling and pattern creation modules use Model and Patterns Creation algorithms (MPCs) being capable of discovering patterns that reflect the relationships, conditions and constants of the available data. 3. A system according to claim 2, in which the modeling tasks include: state-transitions learning of a system or an individual; learning probabilistic cause-effect conditions among a given set of random variables; context-aware learning. 4. 
A system according to claim 2, in which the synthetic data generation modules use Syntactic Data Production (SDP) algorithms to generate new and fabricated data samples utilizing the models learned by the MPCs. 5. A system according to claim 1, further comprising a Query API and a Query Processer to receive and process data-generation queries. 6. A system according to claim 5, further comprising a query cache for caching queries and query results. 7. A system according to claim 1, further comprising a User Interface for allowing interaction with the XDR core module and server-side components. 8. A system according to claim 1, in which the data sources are located locally on the computerized device that runs the data fabrication system, or on an external computerized device. 9. A system according to claim 1, in which the data splitting module splits the data into training and testing sets by using random based or time based splitting. 10. A system according to claim 1, in which the data is aggregated and prepared for further usage.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system for generating fabricated pattern data records (XDRs) based on data from accessible data sources, which comprises an XDR core module containing one or more modeling and pattern creation modules for modeling original data received from the data sources; one or more synthetic data generation modules for generating fabricated data, based on the patterns created by the modeling and pattern creation modules; a data splitting module for splitting the data into training and testing sets according to a predetermined policy; an XDR storage database for storing created patterns and fabricated data; a configuration manager for controlling the operation of the modeling and pattern creation modules and of the synthetic data generation modules; a plurality of XDR agents being software components for communicating with the data sources and accessing relevant data, using a unique API of each data source. Each of the XDR agents is capable of identifying the data-structures of its corresponding data source; transforming the data structures into a unified input structure being used by the XDR core module; a data-store communication module for mediating between the XDR agents and the XDR core modules by using data transformation.
G06N5047
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system for generating fabricated pattern data records (XDRs) based on data from accessible data sources, which comprises an XDR core module containing one or more modeling and pattern creation modules for modeling original data received from the data sources; one or more synthetic data generation modules for generating fabricated data, based on the patterns created by the modeling and pattern creation modules; a data splitting module for splitting the data into training and testing sets according to a predetermined policy; an XDR storage database for storing created patterns and fabricated data; a configuration manager for controlling the operation of the modeling and pattern creation modules and of the synthetic data generation modules; a plurality of XDR agents being software components for communicating with the data sources and accessing relevant data, using a unique API of each data source. Each of the XDR agents is capable of identifying the data-structures of its corresponding data source; transforming the data structures into a unified input structure being used by the XDR core module; a data-store communication module for mediating between the XDR agents and the XDR core modules by using data transformation.
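The data splitting module of the XDR patent above (claims 1 and 9: split data into training and testing sets by a random-based or time-based policy) can be sketched as follows. The record format (timestamp, payload), the 80/20 train fraction, and the fixed shuffle seed are illustrative assumptions.

```python
import random

def random_split(records, train_fraction=0.8, seed=0):
    """Random-based split under a fixed train-fraction policy."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def time_split(records, cutoff):
    """Time-based split: records are (timestamp, payload) pairs, with all
    records at or before the cutoff assigned to training."""
    train = [r for r in records if r[0] <= cutoff]
    test = [r for r in records if r[0] > cutoff]
    return train, test

records = [(1, "a"), (2, "b"), (3, "c"), (4, "d"), (5, "e")]
train_set, test_set = time_split(records, cutoff=3)
```

A time-based policy is the natural choice when the fabricated data must preserve temporal cause-effect structure, since it never lets future records leak into the training set.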
A flexible persistence modeling system and method for building flexible persistence models for education institutions using a Markov model based on units of academic progress of a non-traditional learning program of an education institution. The Markov model is used to quantify transitions of students between the states as parameters of state transitions so that features from the Markov model with the parameters of state transitions can be extracted that are related to the non-traditional learning program of the education institution using defined flexible persistence. The extracted features can then be used to build at least one flexible persistence model for different segments of the students.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for building flexible persistence models for education institutions, the method comprising: translating units of academic progress of a non-traditional learning program of an education institution into states of a Markov model; instantiating the Markov model to quantify transitions of students between the states as parameters of state transitions; defining flexible persistence in terms of state-transitional characteristics of the students using the Markov model with the parameters of state transitions, wherein the flexible persistence indicate student enrollment from one collection of academic progress units to another collection of academic progress units; extracting features from the Markov model with the parameters of state transitions that are related to the non-traditional learning program of the education institution using the defined flexible persistence; and building at least one flexible persistence model using the extracted features for different segments of the students. 2. The method of claim 1, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating flexible sessions of the non-traditional learning program into states of the Markov model so that each flexible session corresponds to at least one of the states of the Markov model, wherein the flexible sessions are not predefined with respect to sequence. 3. The method of claim 2, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the flexible sessions of the non-traditional learning program from the Markov model with the parameters of state transitions. 4. 
The method of claim 1, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating competency units of the non-traditional learning program into states of a modified Markov model augmented by a hierarchical time-series tree structure so that each competency unit corresponds to at least one of the states of the modified Markov model. 5. The method of claim 4, wherein the hierarchical time-series tree structure includes parent states representing the students advancing at different speeds for a first competency unit of a course and child states representing the students advancing at different speeds for a second competency unit of the course from the parent states. 6. The method of claim 4, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the competency units of the non-traditional learning program from the Markov model with the parameters of state transitions. 7. The method of claim 6, wherein the features related to the competency units of the non-traditional learning program are based on a sliding window of time so that the features are derived from the sliding windows of time at different times. 8. The method of claim 7, wherein the features related to the competency units of the non-traditional learning program are based on a data-adaptive comparison basis so that the features are derived from comparison of the students who progress at a similar rate of competency unit mastery based on the sliding window of time. 9. The method of claim 8, wherein extracting the features from the Markov model with the parameters of state transitions includes using dynamic time warping for overlapping sessions with respect to time or using anchoring of the overlapping sessions for session comparisons. 10. 
A computer-readable storage medium containing program instructions for a method for building flexible persistence models for education institutions, wherein execution of the program instructions by one or more processors of a computer system causes the one or more processors to perform steps comprising: translating units of academic progress of a non-traditional learning program of an education institution into states of a Markov model; instantiating the Markov model to quantify transitions of students between the states as parameters of state transitions; defining flexible persistence in terms of state-transitional characteristics of the students using the Markov model with the parameters of state transitions, wherein the flexible persistence indicates student enrollment from one collection of academic progress units to another collection of academic progress units; extracting features from the Markov model with the parameters of state transitions that are related to the non-traditional learning program of the education institution using the defined flexible persistence; and building at least one flexible persistence model using the extracted features for different segments of the students. 11. The computer-readable storage medium of claim 10, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating flexible sessions of the non-traditional learning program into states of the Markov model so that each flexible session corresponds to at least one of the states of the Markov model, wherein the flexible sessions are not predefined with respect to sequence. 12. 
The computer-readable storage medium of claim 11, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the flexible sessions of the non-traditional learning program from the Markov model with the parameters of state transitions. 13. The computer-readable storage medium of claim 10, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating competency units of the non-traditional learning program into states of a modified Markov model augmented by a hierarchical time-series tree structure so that each competency unit corresponds to at least one of the states of the modified Markov model. 14. The computer-readable storage medium of claim 13, wherein the hierarchical time-series tree structure includes parent states representing the students advancing at different speeds for a first competency unit of a course and child states representing the students advancing at different speeds for a second competency unit of the course from the parent states. 15. The computer-readable storage medium of claim 13, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the competency units of the non-traditional learning program from the Markov model with the parameters of state transitions. 16. The computer-readable storage medium of claim 15, wherein the features related to the competency units of the non-traditional learning program are based on a sliding window of time so that the features are derived from the sliding windows of time at different times. 17. 
The computer-readable storage medium of claim 16, wherein the features related to the competency units of the non-traditional learning program are based on a data-adaptive comparison basis so that the features are derived from comparison of the students who progress at a similar rate of competency unit mastery based on the sliding window of time. 18. A flexible persistence modeling system comprising: memory; and a processor configured to: translate units of academic progress of a non-traditional learning program of an education institution into states of a Markov model; instantiate the Markov model to quantify transitions of students between the states as parameters of state transitions; define flexible persistence in terms of state-transitional characteristics of the students using the Markov model with the parameters of state transitions, wherein the flexible persistence indicates student enrollment from one collection of academic progress units to another collection of academic progress units; extract features from the Markov model with the parameters of state transitions that are related to the non-traditional learning program of the education institution using the defined flexible persistence; and build at least one flexible persistence model using the extracted features for different segments of the students. 19. The flexible persistence modeling system of claim 18, wherein the processor is configured to translate flexible sessions of the non-traditional learning program into states of the Markov model so that each flexible session corresponds to at least one of the states of the Markov model, wherein the flexible sessions are not predefined with respect to sequence. 20. The flexible persistence modeling system of claim 19, wherein the processor is configured to extract features related to the flexible sessions of the non-traditional learning program from the Markov model with the parameters of state transitions. 21. 
The flexible persistence modeling system of claim 18, wherein the processor is configured to translate competency units of the non-traditional learning program into states of a modified Markov model augmented by a hierarchical time-series tree structure so that each competency unit corresponds to at least one of the states of the modified Markov model. 22. The flexible persistence modeling system of claim 21, wherein the processor is configured to extract features related to the competency units of the non-traditional learning program from the Markov model with the parameters of state transitions. 23. The flexible persistence modeling system of claim 22, wherein the features related to the competency units of the non-traditional learning program are based on a sliding window of time so that the features are derived from the sliding windows of time at different times. 24. The flexible persistence modeling system of claim 23, wherein the features related to the competency units of the non-traditional learning program are based on a data-adaptive comparison basis so that the features are derived from comparison of the students who progress at a similar rate of competency unit mastery based on the sliding window of time.
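The Markov-model instantiation in claim 1 — counting student transitions between academic-progress states and normalizing the counts into "parameters of state transitions" — can be sketched as follows. This is a minimal illustration with hypothetical state names and toy enrollment sequences, not the patented implementation:

```python
import numpy as np

# Hypothetical academic-progress states for a non-traditional program:
# 0 = enrolled, 1 = completed_session, 2 = stopped_out
STATES = ["enrolled", "completed_session", "stopped_out"]

def estimate_transition_matrix(sequences, n_states):
    """Count observed state-to-state transitions across all student
    sequences and normalize each row into transition probabilities."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for states that are never left
    row_sums[row_sums == 0] = 1.0
    return counts / row_sums

# Toy per-student state sequences (illustrative data only)
sequences = [
    [0, 1, 0, 1, 1],
    [0, 0, 2],
    [0, 1, 2],
]
P = estimate_transition_matrix(sequences, len(STATES))
# Each row of P sums to 1 and plays the role of the "parameters of
# state transitions" from which persistence features are extracted,
# e.g. P[0, 2], the probability of moving from "enrolled" to "stopped_out".
```

Features for the persistence model (claim 1's final step) would then be derived from entries of `P` for different student segments.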
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A flexible persistence modeling system and method for building flexible persistence models for education institutions using a Markov model based on units of academic progress of a non-traditional learning program of an education institution. The Markov model is used to quantify transitions of students between the states as parameters of state transitions so that features from the Markov model with the parameters of state transitions can be extracted that are related to the non-traditional learning program of the education institution using defined flexible persistence. The extracted features can then be used to build at least one flexible persistence model for different segments of the students.
G06N5022
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A flexible persistence modeling system and method for building flexible persistence models for education institutions using a Markov model based on units of academic progress of a non-traditional learning program of an education institution. The Markov model is used to quantify transitions of students between the states as parameters of state transitions so that features from the Markov model with the parameters of state transitions can be extracted that are related to the non-traditional learning program of the education institution using defined flexible persistence. The extracted features can then be used to build at least one flexible persistence model for different segments of the students.
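The sliding-window features of claims 7 and 16 above — deriving the same feature from windows of time "at different times" so that students can be compared at a similar stage of competency-unit mastery — admit a simple sketch. The weekly completion counts and the per-window mean/total features are hypothetical choices for illustration:

```python
import numpy as np

# Weekly counts of competency units completed by one student (toy data)
completions = np.array([1, 0, 2, 1, 0, 0, 3, 1])

def sliding_window_features(series, window=3):
    """Derive (mean, total) over each window of `window` consecutive
    periods, yielding the same feature at different times."""
    feats = []
    for start in range(len(series) - window + 1):
        w = series[start:start + window]
        feats.append((w.mean(), w.sum()))
    return feats

feats = sliding_window_features(completions)
# feats[k] describes the window starting at week k; comparing these
# across students at matching windows gives a data-adaptive basis.
```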
A method and an apparatus are described to classify data. The method and apparatus include selecting a hypothesis class among entire classes. The method and corresponding apparatus generate output data with regard to the entire classes by applying a classification algorithm to input data, and modify the input data to increase a value of the hypothesis class among the output data in response to a re-classification condition being met. The modified input data is set to be new input data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of classifying data, comprising: selecting a hypothesis class among entire classes; generating output data with regard to the entire classes by applying a classification algorithm to input data; modifying the input data to increase a value of the hypothesis class among the output data in response to a re-classification condition being met; and setting the modified input data to be new input data. 2. The method of claim 1, wherein the re-classification condition comprises at least one of a value of the hypothesis class being lower than a preset threshold and a number of a re-classification iteration being lower than a preset number of the re-classifications. 3. The method of claim 1, further comprising: outputting the input data and the output data in response to the determination that the re-classification condition is not met. 4. The method of claim 1, wherein the modifying of the input data comprises: defining a loss function of the classification algorithm using the hypothesis class, calculating a gradient vector of the defined loss function, and modifying the input data on a basis of the gradient vector. 5. The method of claim 4, wherein the modifying of the input data comprises reducing each value of the input data by a preset positive value in the direction of a gradient. 6. The method of claim 4, wherein the modifying of the input data on a basis of the gradient vector comprises: for each value of the input data, modifying to 0 a value of the input data in which a result from multiplying a sign and a gradient of each value is greater than or equal to a reference value, or a value of the input data, in which an absolute value of the gradient is greater than or equal to the reference value. 7. 
The method of claim 4, wherein the modifying of the input data on a basis of the gradient vector comprises: reducing, in the direction in which a gradient descends, each value of the input data by a positive value. 8. The method of claim 1, further comprising: generating initial output data of the entire classes by applying the classification algorithm to the received input data, and the selecting of the hypothesis class is performed on a basis of a size of each value of the initial output data. 9. The method of claim 1, wherein the classification algorithm is one of a neural network, a convolutional neural network (CNN), and a recurrent neural network (RNN). 10. An apparatus to classify data, comprising: a hypothesis class selector configured to select one hypothesis class among entire classes; a data classifier configured to generate output data with regard to the entire classes by applying a classification algorithm to input data; and a data setter configured to modify input data to increase a value of the hypothesis class among the output data and set the modified input data to new input data in response to a determination that a re-classification condition is met. 11. The apparatus of claim 10, wherein the re-classification condition comprises at least one of a value of the hypothesis class being lower than a preset threshold and a number of a re-classification iteration being lower than a preset number of the re-classifications. 12. The apparatus of claim 10, in response to a determination that the re-classification condition is not met, further comprising: a result output configured to output the input data and the output data. 13. 
The apparatus of claim 10, wherein the data setter comprises: a loss function definer configured to define a loss function of the classification algorithm by using the hypothesis class, a gradient calculator configured to calculate a gradient vector with respect to the defined loss function, and a data modifier configured to modify the input data on a basis of the gradient vector. 14. The apparatus of claim 13, wherein the data modifier is configured to reduce each value of the input data by a preset positive value in the direction of a gradient. 15. The apparatus of claim 13, wherein the data modifier is configured to, for each value of the input data, modify to 0 a value of the input data in which a result from multiplying a sign and a gradient of each value is greater than or equal to a reference value, or a value of the input data in which an absolute value of the gradient is greater than or equal to the reference value. 16. The apparatus of claim 13, wherein the data modifier modifies the input data on a basis of the gradient vector by reducing, in the direction in which a gradient descends, each value of the input data by a positive value. 17. The apparatus of claim 10, wherein the hypothesis class selector is configured to generate initial output data with respect to the entire classes by applying the classification algorithm to the received input data and select the hypothesis class on a basis of a size of each value of the initial output data. 18. The apparatus of claim 10, wherein the classification algorithm is one of a neural network, a convolutional neural network (CNN), and a recurrent neural network (RNN). 19. 
A method of segmenting a region of interest (ROI), comprising: selecting one hypothesis class among entire classes; generating output data with regard to the entire classes by applying a classification algorithm to input data; modifying the input data to increase a value of the hypothesis class among the output data; and segmenting, as ROIs, an area from the modified input data based on the modifying. 20. The method of claim 19, further comprising: in response to a determination that the ROI is to be re-segmented, generating new input data that comprises the segmented area that is continuous; and repeatedly performing operations subsequent to the generating of the output data. 21. The method of claim 19, wherein the segmenting of the area comprises segmenting a continuous area, of which a value is increased as ROIs from the modified input data by using a segmentation algorithm. 22. The method of claim 21, wherein the segmentation algorithm comprises at least one of a graph cut algorithm and a conditional random field (CRF) algorithm. 23. The method of claim 19, wherein the modifying of the input data comprises: defining a loss function of the classification algorithm using the hypothesis class; calculating a gradient vector of the defined loss function; and modifying the input data based on the gradient vector. 24. An apparatus to segment a region of interest (ROI), comprising: a hypothesis class selector configured to select a hypothesis class among entire classes; a data classifier configured to generate output data about the entire classes by applying a classification algorithm to input data; a data setter configured to modify the input data to increase a value of the hypothesis class among the output data and outputting a modification result indicative thereof; and an ROI segmentor configured to segment, as ROIs, an area from the modified input data based on the modification result. 25. 
The apparatus of claim 24, wherein, in response to a determination that the ROI is to be re-segmented, the data setter is configured to generate new input data that comprises the one or more segmented areas. 26. The apparatus of claim 24, wherein the ROI segmentor is configured to segment a continuous area, of which values are increased, as ROIs from the modified input data using a segmentation algorithm. 27. The apparatus of claim 26, wherein the segmentation algorithm comprises at least one of a graph cut algorithm and a conditional random field (CRF) algorithm. 28. The apparatus of claim 24, wherein the data setter comprises: a loss function definer configured to define a loss function of the classification algorithm using the hypothesis class; a gradient calculator configured to calculate a gradient vector of the defined loss function; and a data modifier configured to modify the input data on a basis of the gradient vector.
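The gradient-based input modification recited in claims 4-7 and 13-16 — defining a loss from the hypothesis class, computing its gradient vector with respect to the input, and reducing the input in the direction in which the gradient descends until the re-classification condition no longer holds — can be sketched with a linear softmax classifier standing in for the CNN/RNN the claims name. The model, dimensions, step size, and threshold here are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # 4 input features, 3 classes (hypothetical model)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x):
    """Stand-in classification algorithm: linear scores + softmax."""
    return softmax(x @ W)

def modify_input(x, hypothesis, lr=0.1, threshold=0.9, max_iters=500):
    """Repeat while the re-classification condition is met (hypothesis
    score below threshold, iteration count below cap): descend the
    gradient of the loss -log p(hypothesis | x) with respect to x."""
    for _ in range(max_iters):
        p = classify(x)
        if p[hypothesis] >= threshold:
            break
        # Gradient of -log p[h] w.r.t. x for this softmax-linear model
        grad = W @ (p - np.eye(len(p))[hypothesis])
        x = x - lr * grad  # reduce x against the gradient, raising p[h]
    return x

x0 = rng.normal(size=4)
h = int(np.argmax(classify(x0)))  # select hypothesis class from initial output
x1 = modify_input(x0, h)
# classify(x1)[h] is at least classify(x0)[h]: each descent step on the
# convex loss increases the hypothesis-class probability.
```

With a small enough step size the hypothesis-class score rises monotonically; the claims' sign/magnitude zeroing rule (claims 6 and 15) would replace the update line with a thresholded modification of individual input values.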