Variables not subject to Optimization #294
I think a good solution would be an optional function: for example, in the conll04 graph.
|
An instance-wise immutable variable can be achieved by combining a few sensors (so that we don't have to invent a new feature). We will need a `WhereSensor`. For example:

```python
import torch

class WhereSensor(ModuleSensor):
    def __init__(self, *args, condition_fn, **kwargs):
        super().__init__(*args, **kwargs)  # some initialization
        self.condition_fn = condition_fn

    def forward(self, x, y):
        # keep x where the condition holds, fall back to y elsewhere
        return torch.where(self.condition_fn(x, y), x, y)
```

Then in the declaration we have:

```python
result["pre_prediction"] = SomeModuleLearner()
result["constant_prediction"] = ConstantSensor(data=constant_matrix)

def condition(x, y):
    return ...  # whatever rules

result[prediction] = WhereSensor("pre_prediction", "constant_prediction", condition_fn=condition)
```
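For clarity, a minimal, framework-independent sketch of the masking behavior `torch.where` provides in the `forward` above (plain PyTorch; the tensors and mask are purely illustrative):

```python
import torch

# illustrative learner output and fixed assignments for three instances
pre_prediction = torch.tensor([0.2, 0.9, 0.4])
constant_prediction = torch.tensor([1.0, 1.0, 1.0])

# illustrative condition: only instance 1 keeps the learner output
keep_prediction = torch.tensor([False, True, False])

# fixed instances take the constant value, the rest keep the learner output
final = torch.where(keep_prediction, pre_prediction, constant_prediction)
print(final)  # tensor([1.0000, 0.9000, 1.0000])
```
|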
I think the main question is whether we want the indicator to be a fixed property of each concept, or whether we want to be able to define whatever format or function we need on top of the existing properties, with no restriction on the name of the property that marks a variable as immutable. In the first scenario, we can add the indicator to the concept declaration itself. I think the logic or function is the better approach, as we can express more complex scenarios and there is no need to reserve a keyword for it.
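To make the two options concrete, a hypothetical sketch (none of these names are the framework's actual API; `concept` is taken from the declaration below):

```python
# Option 1 (hypothetical): a reserved flag fixed on the concept declaration
entry = concept(name="entry", fixed=True)

# Option 2 (hypothetical): an arbitrary user-defined filter over existing properties
def is_fixed(node):
    return node.getAttribute("given") == 1  # any rule over existing properties
```
|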
```python
entry = concept(name="entry")
```
|
We discussed this in the meeting and decided to go with the function-based approach. |
There was also a discussion about setting a default value for the probability of variables that we do not have any prediction for, but that still appear in the problem as valid variables. |
1- The main decision was that we simply treat the assignments to those variables as hard equality constraints and keep them in both the objective and the constraints. Their coefficient values in the objective should not matter: given the hard independent assignments, those terms in the objective are just constants and will not change the objective or the solution. We do need to keep the variables, though.
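A minimal sketch of this decision in gurobipy (Gurobi is the solver mentioned below; the variables and coefficients are illustrative):

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("fixed-vars")

# x is a normal decision variable; y has a fixed, given assignment
x = m.addVar(vtype=GRB.BINARY, name="x")
y = m.addVar(vtype=GRB.BINARY, name="y")

# hard equality constraint pinning y to its given value
m.addConstr(y == 1, name="fix_y")

# y stays in the objective; its term (0.3 * 1) is a constant and
# cannot influence which value the solver picks for x
m.setObjective(0.7 * x + 0.3 * y, GRB.MAXIMIZE)
m.optimize()
print(x.X, y.X)  # y.X is guaranteed to be 1
```
|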
I think maybe adding them to the optimization objective is redundant, as their value is fixed.
The difference is that |
I think we discussed whether we want to rely on Gurobi for this step, hoping it is efficient enough to remove all the redundancies itself. If we remove them from the objective ourselves, there is additional work: it means we have to handle the single-variable equality constraints separately and propagate their values before handing the problem to Gurobi.
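For contrast, a rough sketch of that manual alternative, propagating single-variable equalities before the model is built (plain Python; the (coefficients, rhs) constraint representation is an assumption for illustration):

```python
def propagate_fixed(constraints):
    """Split off single-variable equalities and substitute their values."""
    fixed = {}
    remaining = []
    for coeffs, rhs in constraints:  # each constraint: ({var: coef}, rhs)
        if len(coeffs) == 1:  # single-variable equality: pin its value
            (var, coef), = coeffs.items()
            fixed[var] = rhs / coef
        else:
            remaining.append((coeffs, rhs))

    # substitute the fixed values into the remaining constraints
    substituted = []
    for coeffs, rhs in remaining:
        new_coeffs = {v: c for v, c in coeffs.items() if v not in fixed}
        new_rhs = rhs - sum(c * fixed[v] for v, c in coeffs.items() if v in fixed)
        substituted.append((new_coeffs, new_rhs))
    return fixed, substituted
```
|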
OK, then I think we just need to set a default for the missing probabilities, to 1 or any other number; I think either 1 or even 0 for the positive probability is fine. We also had the scenario where the output of the prob is not of the expected shape.
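A small sketch of that defaulting step (plain PyTorch; the (negative, positive) pair layout and all names are assumptions, since the exact shape is not spelled out here):

```python
import torch

DEFAULT_POSITIVE_PROB = 1.0  # or 0.0; either works, given the hard constraints

def get_prob(predictions, var):
    """Return a (negative, positive) probability pair for every variable."""
    if var not in predictions:
        p = DEFAULT_POSITIVE_PROB
        return torch.tensor([1.0 - p, p])
    out = predictions[var]
    if out.dim() == 0:  # a bare positive probability instead of a pair
        return torch.stack([1.0 - out, out])
    return out
```
|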
If we go with the above approach, we do not need to mark anything as "not subject to optimization". All the variables are there, and all we add is equality constraints. |
Yes, but we still need an indicator telling us to add the equality constraints. |
In that case, we need something for the equality constraints. To avoid confusion, maybe we should then rename eqL to filter, or something similar. We can discuss this in the meeting. |
Hi all,
As we discussed, there is a need to define variables that are assigned fixed predictions and are not subject to the optimization process. We can call them immutable variables. The challenge is that these are not just separate concepts but separate instances of the same concept, which also has optimizable variables. E.g., some phrases in the text may have a one-to-one mapping to their label, which we don't want changed by whatever the optimization tries on them, while other phrases are still subject to optimization.
Another scenario is the sudoku example, where some entries are given by default and should not be changed, while others are predicted and are subject to change.
My suggestion is to first add attributes, in the modeling part, to the variables that are immutable, but not to use those attributes directly to ignore the variables. Instead, let the user write arbitrary filter functions based on the current variable selection interface (the logical interface) and mark variables as immutable through them. The simplest rule is just to ignore variables whose specific attr is set to 1 (see the sketch below). However, the selection process may become much more complex and comprehensive given the interface we have. Please write your feedback on this or provide other proposals, discussing their comprehensiveness and advantages/disadvantages.
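A hypothetical illustration of that simplest rule (the node interface, the attribute name attr, and the variables list are assumptions for the sketch):

```python
def is_immutable(node):
    # the simplest rule: skip variables whose `attr` indicator is set to 1
    return node.getAttribute("attr") == 1

# during inference setup (illustrative): only optimize the mutable variables
optimizable = [v for v in variables if not is_immutable(v)]
```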